Oct 13 00:00:34.771142 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 13 00:00:34.771164 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Oct 12 22:32:01 -00 2025
Oct 13 00:00:34.771174 kernel: KASLR enabled
Oct 13 00:00:34.771180 kernel: efi: EFI v2.7 by EDK II
Oct 13 00:00:34.771185 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Oct 13 00:00:34.771191 kernel: random: crng init done
Oct 13 00:00:34.771198 kernel: secureboot: Secure boot disabled
Oct 13 00:00:34.771204 kernel: ACPI: Early table checksum verification disabled
Oct 13 00:00:34.771210 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Oct 13 00:00:34.771217 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 13 00:00:34.771224 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:00:34.771230 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:00:34.771235 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:00:34.771242 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:00:34.771249 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:00:34.771256 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:00:34.771262 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:00:34.771268 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:00:34.771274 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:00:34.771281 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 13 00:00:34.771287 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 13 00:00:34.771293 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 13 00:00:34.771299 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Oct 13 00:00:34.771305 kernel: Zone ranges:
Oct 13 00:00:34.771312 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 13 00:00:34.771319 kernel: DMA32 empty
Oct 13 00:00:34.771325 kernel: Normal empty
Oct 13 00:00:34.771331 kernel: Device empty
Oct 13 00:00:34.771337 kernel: Movable zone start for each node
Oct 13 00:00:34.771343 kernel: Early memory node ranges
Oct 13 00:00:34.771349 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Oct 13 00:00:34.771355 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Oct 13 00:00:34.771362 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Oct 13 00:00:34.771368 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Oct 13 00:00:34.771374 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Oct 13 00:00:34.771380 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Oct 13 00:00:34.771386 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Oct 13 00:00:34.771393 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Oct 13 00:00:34.771399 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Oct 13 00:00:34.771405 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 13 00:00:34.771414 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 13 00:00:34.771421 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 13 00:00:34.771427 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 13 00:00:34.771435 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 13 00:00:34.771442 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 13 00:00:34.771448 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Oct 13 00:00:34.771455 kernel: psci: probing for conduit method from ACPI.
Oct 13 00:00:34.771461 kernel: psci: PSCIv1.1 detected in firmware.
Oct 13 00:00:34.771468 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 13 00:00:34.771474 kernel: psci: Trusted OS migration not required
Oct 13 00:00:34.771480 kernel: psci: SMC Calling Convention v1.1
Oct 13 00:00:34.771487 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 13 00:00:34.771493 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 13 00:00:34.771501 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 13 00:00:34.771508 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 13 00:00:34.771514 kernel: Detected PIPT I-cache on CPU0
Oct 13 00:00:34.771521 kernel: CPU features: detected: GIC system register CPU interface
Oct 13 00:00:34.771527 kernel: CPU features: detected: Spectre-v4
Oct 13 00:00:34.771534 kernel: CPU features: detected: Spectre-BHB
Oct 13 00:00:34.771540 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 13 00:00:34.771547 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 13 00:00:34.771553 kernel: CPU features: detected: ARM erratum 1418040
Oct 13 00:00:34.771560 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 13 00:00:34.771566 kernel: alternatives: applying boot alternatives
Oct 13 00:00:34.771573 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=37fc523060a9b8894388e25ab0f082059dd744d472a2b8577211d4b3dd66a910
Oct 13 00:00:34.771581 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 00:00:34.771588 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 00:00:34.771595 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 00:00:34.771601 kernel: Fallback order for Node 0: 0
Oct 13 00:00:34.771608 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Oct 13 00:00:34.771614 kernel: Policy zone: DMA
Oct 13 00:00:34.771620 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 00:00:34.771627 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Oct 13 00:00:34.771634 kernel: software IO TLB: area num 4.
Oct 13 00:00:34.771644 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Oct 13 00:00:34.771654 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Oct 13 00:00:34.771662 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 13 00:00:34.771669 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 00:00:34.771675 kernel: rcu: RCU event tracing is enabled.
Oct 13 00:00:34.771682 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 13 00:00:34.771689 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 00:00:34.771695 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 00:00:34.771702 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 00:00:34.771708 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 13 00:00:34.771715 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 00:00:34.771722 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 00:00:34.771729 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 13 00:00:34.771736 kernel: GICv3: 256 SPIs implemented
Oct 13 00:00:34.771743 kernel: GICv3: 0 Extended SPIs implemented
Oct 13 00:00:34.771749 kernel: Root IRQ handler: gic_handle_irq
Oct 13 00:00:34.771756 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 13 00:00:34.771762 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Oct 13 00:00:34.771768 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 13 00:00:34.771775 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 13 00:00:34.771781 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Oct 13 00:00:34.771788 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Oct 13 00:00:34.771795 kernel: GICv3: using LPI property table @0x0000000040130000
Oct 13 00:00:34.771801 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Oct 13 00:00:34.771808 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 00:00:34.771815 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 00:00:34.771875 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 13 00:00:34.771882 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 13 00:00:34.771889 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 13 00:00:34.771896 kernel: arm-pv: using stolen time PV
Oct 13 00:00:34.771902 kernel: Console: colour dummy device 80x25
Oct 13 00:00:34.771909 kernel: ACPI: Core revision 20240827
Oct 13 00:00:34.771916 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 13 00:00:34.771923 kernel: pid_max: default: 32768 minimum: 301
Oct 13 00:00:34.771930 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 00:00:34.771939 kernel: landlock: Up and running.
Oct 13 00:00:34.771946 kernel: SELinux: Initializing.
Oct 13 00:00:34.771952 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 00:00:34.771965 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 00:00:34.771973 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 00:00:34.771980 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 00:00:34.771987 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 00:00:34.771994 kernel: Remapping and enabling EFI services.
Oct 13 00:00:34.772000 kernel: smp: Bringing up secondary CPUs ...
Oct 13 00:00:34.772013 kernel: Detected PIPT I-cache on CPU1
Oct 13 00:00:34.772020 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 13 00:00:34.772027 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Oct 13 00:00:34.772036 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 00:00:34.772043 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 13 00:00:34.772050 kernel: Detected PIPT I-cache on CPU2
Oct 13 00:00:34.772057 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 13 00:00:34.772064 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Oct 13 00:00:34.772073 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 00:00:34.772079 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 13 00:00:34.772086 kernel: Detected PIPT I-cache on CPU3
Oct 13 00:00:34.772094 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 13 00:00:34.772101 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Oct 13 00:00:34.772108 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 00:00:34.772115 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 13 00:00:34.772122 kernel: smp: Brought up 1 node, 4 CPUs
Oct 13 00:00:34.772129 kernel: SMP: Total of 4 processors activated.
Oct 13 00:00:34.772137 kernel: CPU: All CPU(s) started at EL1
Oct 13 00:00:34.772144 kernel: CPU features: detected: 32-bit EL0 Support
Oct 13 00:00:34.772151 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 13 00:00:34.772158 kernel: CPU features: detected: Common not Private translations
Oct 13 00:00:34.772165 kernel: CPU features: detected: CRC32 instructions
Oct 13 00:00:34.772172 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 13 00:00:34.772179 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 13 00:00:34.772186 kernel: CPU features: detected: LSE atomic instructions
Oct 13 00:00:34.772193 kernel: CPU features: detected: Privileged Access Never
Oct 13 00:00:34.772201 kernel: CPU features: detected: RAS Extension Support
Oct 13 00:00:34.772208 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 13 00:00:34.772215 kernel: alternatives: applying system-wide alternatives
Oct 13 00:00:34.772222 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Oct 13 00:00:34.772230 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Oct 13 00:00:34.772237 kernel: devtmpfs: initialized
Oct 13 00:00:34.772244 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 00:00:34.772251 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 13 00:00:34.772258 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 13 00:00:34.772266 kernel: 0 pages in range for non-PLT usage
Oct 13 00:00:34.772273 kernel: 508560 pages in range for PLT usage
Oct 13 00:00:34.772280 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 00:00:34.772287 kernel: SMBIOS 3.0.0 present.
Oct 13 00:00:34.772294 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Oct 13 00:00:34.772301 kernel: DMI: Memory slots populated: 1/1
Oct 13 00:00:34.772308 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 00:00:34.772315 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 13 00:00:34.772322 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 13 00:00:34.772331 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 13 00:00:34.772338 kernel: audit: initializing netlink subsys (disabled)
Oct 13 00:00:34.772345 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Oct 13 00:00:34.772352 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 00:00:34.772359 kernel: cpuidle: using governor menu
Oct 13 00:00:34.772366 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 13 00:00:34.772373 kernel: ASID allocator initialised with 32768 entries
Oct 13 00:00:34.772380 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 00:00:34.772387 kernel: Serial: AMBA PL011 UART driver
Oct 13 00:00:34.772395 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 00:00:34.772402 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 00:00:34.772409 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 13 00:00:34.772416 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 13 00:00:34.772423 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 00:00:34.772430 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 00:00:34.772437 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 13 00:00:34.772444 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 13 00:00:34.772450 kernel: ACPI: Added _OSI(Module Device)
Oct 13 00:00:34.772459 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 00:00:34.772466 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 00:00:34.772473 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 00:00:34.772480 kernel: ACPI: Interpreter enabled
Oct 13 00:00:34.772487 kernel: ACPI: Using GIC for interrupt routing
Oct 13 00:00:34.772493 kernel: ACPI: MCFG table detected, 1 entries
Oct 13 00:00:34.772500 kernel: ACPI: CPU0 has been hot-added
Oct 13 00:00:34.772507 kernel: ACPI: CPU1 has been hot-added
Oct 13 00:00:34.772514 kernel: ACPI: CPU2 has been hot-added
Oct 13 00:00:34.772521 kernel: ACPI: CPU3 has been hot-added
Oct 13 00:00:34.772530 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 13 00:00:34.772537 kernel: printk: legacy console [ttyAMA0] enabled
Oct 13 00:00:34.772544 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 13 00:00:34.772670 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 13 00:00:34.772737 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 13 00:00:34.772799 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 13 00:00:34.772885 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 13 00:00:34.772950 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 13 00:00:34.772966 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 13 00:00:34.772974 kernel: PCI host bridge to bus 0000:00
Oct 13 00:00:34.773044 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 13 00:00:34.773100 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 13 00:00:34.773153 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 13 00:00:34.773206 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 13 00:00:34.773285 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Oct 13 00:00:34.773357 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 13 00:00:34.773419 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Oct 13 00:00:34.773481 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Oct 13 00:00:34.773541 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 13 00:00:34.773600 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Oct 13 00:00:34.773660 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Oct 13 00:00:34.773721 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Oct 13 00:00:34.773775 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 13 00:00:34.773839 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 13 00:00:34.773894 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 13 00:00:34.773903 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 13 00:00:34.773911 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 13 00:00:34.773918 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 13 00:00:34.773927 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 13 00:00:34.773934 kernel: iommu: Default domain type: Translated
Oct 13 00:00:34.773941 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 13 00:00:34.773948 kernel: efivars: Registered efivars operations
Oct 13 00:00:34.773954 kernel: vgaarb: loaded
Oct 13 00:00:34.773969 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 13 00:00:34.773976 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 00:00:34.773984 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 00:00:34.773990 kernel: pnp: PnP ACPI init
Oct 13 00:00:34.774069 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 13 00:00:34.774079 kernel: pnp: PnP ACPI: found 1 devices
Oct 13 00:00:34.774086 kernel: NET: Registered PF_INET protocol family
Oct 13 00:00:34.774093 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 13 00:00:34.774100 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 13 00:00:34.774107 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 00:00:34.774114 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 13 00:00:34.774121 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 13 00:00:34.774130 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 13 00:00:34.774137 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 00:00:34.774144 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 00:00:34.774151 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 00:00:34.774158 kernel: PCI: CLS 0 bytes, default 64
Oct 13 00:00:34.774165 kernel: kvm [1]: HYP mode not available
Oct 13 00:00:34.774172 kernel: Initialise system trusted keyrings
Oct 13 00:00:34.774179 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 13 00:00:34.774186 kernel: Key type asymmetric registered
Oct 13 00:00:34.774194 kernel: Asymmetric key parser 'x509' registered
Oct 13 00:00:34.774201 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 13 00:00:34.774208 kernel: io scheduler mq-deadline registered
Oct 13 00:00:34.774215 kernel: io scheduler kyber registered
Oct 13 00:00:34.774222 kernel: io scheduler bfq registered
Oct 13 00:00:34.774229 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 13 00:00:34.774373 kernel: ACPI: button: Power Button [PWRB]
Oct 13 00:00:34.774385 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 13 00:00:34.774479 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 13 00:00:34.774495 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 00:00:34.774502 kernel: thunder_xcv, ver 1.0
Oct 13 00:00:34.774509 kernel: thunder_bgx, ver 1.0
Oct 13 00:00:34.774516 kernel: nicpf, ver 1.0
Oct 13 00:00:34.774523 kernel: nicvf, ver 1.0
Oct 13 00:00:34.774598 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 13 00:00:34.774656 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-13T00:00:34 UTC (1760313634)
Oct 13 00:00:34.774665 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 13 00:00:34.774672 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Oct 13 00:00:34.774681 kernel: watchdog: NMI not fully supported
Oct 13 00:00:34.774688 kernel: watchdog: Hard watchdog permanently disabled
Oct 13 00:00:34.774695 kernel: NET: Registered PF_INET6 protocol family
Oct 13 00:00:34.774702 kernel: Segment Routing with IPv6
Oct 13 00:00:34.774709 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 00:00:34.774716 kernel: NET: Registered PF_PACKET protocol family
Oct 13 00:00:34.774723 kernel: Key type dns_resolver registered
Oct 13 00:00:34.774730 kernel: registered taskstats version 1
Oct 13 00:00:34.774737 kernel: Loading compiled-in X.509 certificates
Oct 13 00:00:34.774746 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: b8447a1087a9e9c4d5b9d4c2f2bba5a69a74f139'
Oct 13 00:00:34.774753 kernel: Demotion targets for Node 0: null
Oct 13 00:00:34.774760 kernel: Key type .fscrypt registered
Oct 13 00:00:34.774767 kernel: Key type fscrypt-provisioning registered
Oct 13 00:00:34.774774 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 00:00:34.774781 kernel: ima: Allocated hash algorithm: sha1
Oct 13 00:00:34.774788 kernel: ima: No architecture policies found
Oct 13 00:00:34.774795 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 13 00:00:34.774803 kernel: clk: Disabling unused clocks
Oct 13 00:00:34.774810 kernel: PM: genpd: Disabling unused power domains
Oct 13 00:00:34.774817 kernel: Warning: unable to open an initial console.
Oct 13 00:00:34.774843 kernel: Freeing unused kernel memory: 38976K
Oct 13 00:00:34.774850 kernel: Run /init as init process
Oct 13 00:00:34.774857 kernel: with arguments:
Oct 13 00:00:34.774864 kernel: /init
Oct 13 00:00:34.774871 kernel: with environment:
Oct 13 00:00:34.774878 kernel: HOME=/
Oct 13 00:00:34.774885 kernel: TERM=linux
Oct 13 00:00:34.774894 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 13 00:00:34.774902 systemd[1]: Successfully made /usr/ read-only.
Oct 13 00:00:34.774912 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 00:00:34.774921 systemd[1]: Detected virtualization kvm.
Oct 13 00:00:34.774928 systemd[1]: Detected architecture arm64.
Oct 13 00:00:34.774935 systemd[1]: Running in initrd.
Oct 13 00:00:34.774943 systemd[1]: No hostname configured, using default hostname.
Oct 13 00:00:34.774952 systemd[1]: Hostname set to .
Oct 13 00:00:34.774966 systemd[1]: Initializing machine ID from VM UUID.
Oct 13 00:00:34.774975 systemd[1]: Queued start job for default target initrd.target.
Oct 13 00:00:34.774982 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 00:00:34.774990 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 00:00:34.774998 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 13 00:00:34.775006 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 00:00:34.775013 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 13 00:00:34.775024 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 13 00:00:34.775032 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 13 00:00:34.775040 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 13 00:00:34.775048 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 00:00:34.775055 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 00:00:34.775063 systemd[1]: Reached target paths.target - Path Units.
Oct 13 00:00:34.775070 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 00:00:34.775078 systemd[1]: Reached target swap.target - Swaps.
Oct 13 00:00:34.775086 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 00:00:34.775093 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 00:00:34.775101 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 00:00:34.775109 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 13 00:00:34.775116 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 13 00:00:34.775124 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 00:00:34.775131 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 00:00:34.775140 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 00:00:34.775147 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 00:00:34.775155 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 13 00:00:34.775163 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 00:00:34.775170 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 13 00:00:34.775178 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 13 00:00:34.775186 systemd[1]: Starting systemd-fsck-usr.service...
Oct 13 00:00:34.775193 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 00:00:34.775201 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 00:00:34.775209 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:00:34.775217 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 13 00:00:34.775225 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 00:00:34.775233 systemd[1]: Finished systemd-fsck-usr.service.
Oct 13 00:00:34.775242 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 00:00:34.775268 systemd-journald[244]: Collecting audit messages is disabled.
Oct 13 00:00:34.775287 systemd-journald[244]: Journal started
Oct 13 00:00:34.775306 systemd-journald[244]: Runtime Journal (/run/log/journal/80cdca93d13f4b9290ca273dc40347d9) is 6M, max 48.5M, 42.4M free.
Oct 13 00:00:34.779919 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 13 00:00:34.779958 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:00:34.766327 systemd-modules-load[246]: Inserted module 'overlay'
Oct 13 00:00:34.784254 systemd-modules-load[246]: Inserted module 'br_netfilter'
Oct 13 00:00:34.786115 kernel: Bridge firewalling registered
Oct 13 00:00:34.786133 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 00:00:34.787313 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 00:00:34.788633 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 00:00:34.793185 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 13 00:00:34.795119 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 00:00:34.799004 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 00:00:34.803587 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 00:00:34.809792 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 00:00:34.814337 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 00:00:34.815052 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 13 00:00:34.818136 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 00:00:34.822314 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 00:00:34.823698 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 00:00:34.827003 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 13 00:00:34.848023 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=37fc523060a9b8894388e25ab0f082059dd744d472a2b8577211d4b3dd66a910
Oct 13 00:00:34.864856 systemd-resolved[287]: Positive Trust Anchors:
Oct 13 00:00:34.864872 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 00:00:34.864904 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 00:00:34.869902 systemd-resolved[287]: Defaulting to hostname 'linux'.
Oct 13 00:00:34.870838 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 00:00:34.875675 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 00:00:34.918859 kernel: SCSI subsystem initialized
Oct 13 00:00:34.922850 kernel: Loading iSCSI transport class v2.0-870.
Oct 13 00:00:34.930862 kernel: iscsi: registered transport (tcp)
Oct 13 00:00:34.943848 kernel: iscsi: registered transport (qla4xxx)
Oct 13 00:00:34.943865 kernel: QLogic iSCSI HBA Driver
Oct 13 00:00:34.960542 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 00:00:34.983299 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 00:00:34.985604 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 00:00:35.032065 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 13 00:00:35.034346 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 13 00:00:35.100857 kernel: raid6: neonx8 gen() 15793 MB/s
Oct 13 00:00:35.117850 kernel: raid6: neonx4 gen() 15799 MB/s
Oct 13 00:00:35.134849 kernel: raid6: neonx2 gen() 13196 MB/s
Oct 13 00:00:35.151846 kernel: raid6: neonx1 gen() 10416 MB/s
Oct 13 00:00:35.168855 kernel: raid6: int64x8 gen() 6899 MB/s
Oct 13 00:00:35.185847 kernel: raid6: int64x4 gen() 7122 MB/s
Oct 13 00:00:35.202847 kernel: raid6: int64x2 gen() 6101 MB/s
Oct 13 00:00:35.220010 kernel: raid6: int64x1 gen() 5049 MB/s
Oct 13 00:00:35.220026 kernel: raid6: using algorithm neonx4 gen() 15799 MB/s
Oct 13 00:00:35.238055 kernel: raid6: .... xor() 12329 MB/s, rmw enabled
Oct 13 00:00:35.238078 kernel: raid6: using neon recovery algorithm
Oct 13 00:00:35.244319 kernel: xor: measuring software checksum speed
Oct 13 00:00:35.244336 kernel: 8regs : 20691 MB/sec
Oct 13 00:00:35.244345 kernel: 32regs : 21676 MB/sec
Oct 13 00:00:35.244985 kernel: arm64_neon : 28109 MB/sec
Oct 13 00:00:35.245000 kernel: xor: using function: arm64_neon (28109 MB/sec)
Oct 13 00:00:35.296862 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 13 00:00:35.303371 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 00:00:35.305936 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 00:00:35.334077 systemd-udevd[499]: Using default interface naming scheme 'v255'.
Oct 13 00:00:35.338124 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 00:00:35.340046 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 13 00:00:35.363804 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation
Oct 13 00:00:35.386365 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 00:00:35.388726 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 00:00:35.452370 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 00:00:35.455943 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 13 00:00:35.503805 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 13 00:00:35.503998 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 13 00:00:35.511514 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 13 00:00:35.511566 kernel: GPT:9289727 != 19775487
Oct 13 00:00:35.511576 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 13 00:00:35.511585 kernel: GPT:9289727 != 19775487
Oct 13 00:00:35.511952 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 13 00:00:35.513262 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 00:00:35.514585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 00:00:35.514707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:00:35.522342 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:00:35.526331 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:00:35.543436 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 13 00:00:35.556640 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:00:35.558144 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 13 00:00:35.568949 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 13 00:00:35.582086 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 13 00:00:35.583603 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 13 00:00:35.594068 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 13 00:00:35.595349 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 00:00:35.597533 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 00:00:35.599871 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 00:00:35.602742 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 13 00:00:35.605074 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 13 00:00:35.624928 disk-uuid[594]: Primary Header is updated.
Oct 13 00:00:35.624928 disk-uuid[594]: Secondary Entries is updated.
Oct 13 00:00:35.624928 disk-uuid[594]: Secondary Header is updated.
Oct 13 00:00:35.628098 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 00:00:35.631853 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 00:00:35.634847 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 00:00:36.639848 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 00:00:36.640127 disk-uuid[599]: The operation has completed successfully.
Oct 13 00:00:36.660140 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 13 00:00:36.661065 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 13 00:00:36.695407 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 13 00:00:36.715909 sh[614]: Success
Oct 13 00:00:36.728859 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 13 00:00:36.728932 kernel: device-mapper: uevent: version 1.0.3
Oct 13 00:00:36.730864 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 13 00:00:36.738847 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Oct 13 00:00:36.764678 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 13 00:00:36.767780 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 13 00:00:36.789839 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 13 00:00:36.796624 kernel: BTRFS: device fsid e4495086-3456-43e0-be7b-4c3c53a67174 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (626)
Oct 13 00:00:36.796661 kernel: BTRFS info (device dm-0): first mount of filesystem e4495086-3456-43e0-be7b-4c3c53a67174
Oct 13 00:00:36.796671 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:00:36.801844 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 13 00:00:36.801896 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 13 00:00:36.803048 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 13 00:00:36.804529 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 00:00:36.806039 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 13 00:00:36.806876 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 13 00:00:36.808559 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 13 00:00:36.831990 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657)
Oct 13 00:00:36.832040 kernel: BTRFS info (device vda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:00:36.834086 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:00:36.836848 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 00:00:36.836908 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 00:00:36.842846 kernel: BTRFS info (device vda6): last unmount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:00:36.843907 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 13 00:00:36.846515 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 13 00:00:36.913863 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 00:00:36.917439 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 00:00:36.950536 systemd-networkd[806]: lo: Link UP
Oct 13 00:00:36.950548 systemd-networkd[806]: lo: Gained carrier
Oct 13 00:00:36.951382 systemd-networkd[806]: Enumeration completed
Oct 13 00:00:36.951678 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 00:00:36.953195 ignition[703]: Ignition 2.22.0
Oct 13 00:00:36.951789 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:00:36.953203 ignition[703]: Stage: fetch-offline
Oct 13 00:00:36.951794 systemd-networkd[806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 00:00:36.953235 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:00:36.952788 systemd-networkd[806]: eth0: Link UP
Oct 13 00:00:36.953243 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 00:00:36.952892 systemd-networkd[806]: eth0: Gained carrier
Oct 13 00:00:36.953324 ignition[703]: parsed url from cmdline: ""
Oct 13 00:00:36.952902 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:00:36.953327 ignition[703]: no config URL provided
Oct 13 00:00:36.954435 systemd[1]: Reached target network.target - Network.
Oct 13 00:00:36.953332 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Oct 13 00:00:36.953338 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Oct 13 00:00:36.953360 ignition[703]: op(1): [started] loading QEMU firmware config module
Oct 13 00:00:36.953366 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 13 00:00:36.961456 ignition[703]: op(1): [finished] loading QEMU firmware config module
Oct 13 00:00:36.976881 systemd-networkd[806]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 13 00:00:36.978818 ignition[703]: parsing config with SHA512: 22b7147ef5671810da6cb68ab9819d3ad7885a5b9caa3b3f5eac599cff41db1c04f20bf8df45a17168f55fc05f9b637cb0a9ba3c758cb9c0965619e63724af12
Oct 13 00:00:36.983629 unknown[703]: fetched base config from "system"
Oct 13 00:00:36.983643 unknown[703]: fetched user config from "qemu"
Oct 13 00:00:36.984050 ignition[703]: fetch-offline: fetch-offline passed
Oct 13 00:00:36.984119 ignition[703]: Ignition finished successfully
Oct 13 00:00:36.987238 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 00:00:36.988986 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 13 00:00:36.989901 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 13 00:00:37.021298 ignition[815]: Ignition 2.22.0
Oct 13 00:00:37.021317 ignition[815]: Stage: kargs
Oct 13 00:00:37.021460 ignition[815]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:00:37.021468 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 00:00:37.024446 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 13 00:00:37.022094 ignition[815]: kargs: kargs passed
Oct 13 00:00:37.022146 ignition[815]: Ignition finished successfully
Oct 13 00:00:37.027009 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 13 00:00:37.058669 ignition[823]: Ignition 2.22.0
Oct 13 00:00:37.058687 ignition[823]: Stage: disks
Oct 13 00:00:37.058862 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:00:37.058873 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 00:00:37.061347 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 13 00:00:37.059492 ignition[823]: disks: disks passed
Oct 13 00:00:37.063624 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 13 00:00:37.059537 ignition[823]: Ignition finished successfully
Oct 13 00:00:37.065394 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 13 00:00:37.067116 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 00:00:37.069080 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 00:00:37.070804 systemd[1]: Reached target basic.target - Basic System.
Oct 13 00:00:37.073792 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 13 00:00:37.097265 systemd-resolved[287]: Detected conflict on linux IN A 10.0.0.51
Oct 13 00:00:37.097280 systemd-resolved[287]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Oct 13 00:00:37.101081 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Oct 13 00:00:37.106861 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 13 00:00:37.109362 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 13 00:00:37.175864 kernel: EXT4-fs (vda9): mounted filesystem 1aa1d0b4-cbac-4728-b9e0-662fa574e9ad r/w with ordered data mode. Quota mode: none.
Oct 13 00:00:37.176506 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 13 00:00:37.177986 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 13 00:00:37.181446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 00:00:37.184020 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 13 00:00:37.185102 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 13 00:00:37.185148 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 13 00:00:37.185177 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 00:00:37.198658 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 13 00:00:37.201543 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 13 00:00:37.207099 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (841)
Oct 13 00:00:37.207125 kernel: BTRFS info (device vda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:00:37.207135 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:00:37.209854 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 00:00:37.209903 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 00:00:37.211852 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 00:00:37.239977 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory
Oct 13 00:00:37.245041 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory
Oct 13 00:00:37.249447 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory
Oct 13 00:00:37.253725 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 13 00:00:37.329906 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 13 00:00:37.332934 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 13 00:00:37.334587 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 13 00:00:37.354870 kernel: BTRFS info (device vda6): last unmount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:00:37.372015 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 13 00:00:37.388472 ignition[954]: INFO : Ignition 2.22.0
Oct 13 00:00:37.388472 ignition[954]: INFO : Stage: mount
Oct 13 00:00:37.390319 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 00:00:37.390319 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 00:00:37.390319 ignition[954]: INFO : mount: mount passed
Oct 13 00:00:37.390319 ignition[954]: INFO : Ignition finished successfully
Oct 13 00:00:37.391397 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 13 00:00:37.394604 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 13 00:00:37.794837 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 13 00:00:37.796369 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 00:00:37.814861 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (967)
Oct 13 00:00:37.818393 kernel: BTRFS info (device vda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:00:37.818413 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:00:37.821222 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 00:00:37.821252 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 00:00:37.822758 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 00:00:37.855519 ignition[984]: INFO : Ignition 2.22.0
Oct 13 00:00:37.855519 ignition[984]: INFO : Stage: files
Oct 13 00:00:37.857544 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 00:00:37.857544 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 00:00:37.857544 ignition[984]: DEBUG : files: compiled without relabeling support, skipping
Oct 13 00:00:37.861391 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 13 00:00:37.861391 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 13 00:00:37.864316 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 13 00:00:37.864316 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 13 00:00:37.864316 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 13 00:00:37.863759 unknown[984]: wrote ssh authorized keys file for user: core
Oct 13 00:00:37.869789 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Oct 13 00:00:37.869789 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Oct 13 00:00:37.873533 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 00:00:37.873533 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 00:00:37.873533 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 13 00:00:37.880089 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 13 00:00:37.880089 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 13 00:00:37.880089 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Oct 13 00:00:38.224941 systemd-networkd[806]: eth0: Gained IPv6LL
Oct 13 00:00:38.263468 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Oct 13 00:00:38.641915 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 13 00:00:38.644273 ignition[984]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Oct 13 00:00:38.645958 ignition[984]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 13 00:00:38.648270 ignition[984]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 13 00:00:38.648270 ignition[984]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Oct 13 00:00:38.648270 ignition[984]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Oct 13 00:00:38.665745 ignition[984]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 13 00:00:38.669500 ignition[984]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 13 00:00:38.672513 ignition[984]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 13 00:00:38.672513 ignition[984]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 00:00:38.672513 ignition[984]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 00:00:38.672513 ignition[984]: INFO : files: files passed
Oct 13 00:00:38.672513 ignition[984]: INFO : Ignition finished successfully
Oct 13 00:00:38.674904 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 13 00:00:38.677990 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 13 00:00:38.680207 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 13 00:00:38.695538 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 13 00:00:38.696791 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 13 00:00:38.699555 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 13 00:00:38.702277 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 00:00:38.705029 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 00:00:38.705029 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 00:00:38.704229 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 00:00:38.708425 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 13 00:00:38.713025 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 13 00:00:38.778075 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 00:00:38.778918 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 13 00:00:38.780571 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 13 00:00:38.782668 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 13 00:00:38.784595 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 13 00:00:38.785507 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 13 00:00:38.808894 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 00:00:38.811499 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 13 00:00:38.835519 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 13 00:00:38.836943 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 00:00:38.839079 systemd[1]: Stopped target timers.target - Timer Units.
Oct 13 00:00:38.840912 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 13 00:00:38.841061 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 00:00:38.843671 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 13 00:00:38.844836 systemd[1]: Stopped target basic.target - Basic System.
Oct 13 00:00:38.846803 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 13 00:00:38.848781 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 00:00:38.850785 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 13 00:00:38.852804 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 00:00:38.855298 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 13 00:00:38.857192 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 00:00:38.860283 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 13 00:00:38.862373 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 13 00:00:38.864688 systemd[1]: Stopped target swap.target - Swaps.
Oct 13 00:00:38.866580 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 13 00:00:38.866719 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 00:00:38.869549 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 13 00:00:38.870796 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 00:00:38.872990 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 13 00:00:38.873937 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 00:00:38.876072 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 13 00:00:38.876208 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 13 00:00:38.878926 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 13 00:00:38.879066 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 00:00:38.882041 systemd[1]: Stopped target paths.target - Path Units.
Oct 13 00:00:38.883675 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 13 00:00:38.886904 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 00:00:38.889522 systemd[1]: Stopped target slices.target - Slice Units.
Oct 13 00:00:38.891410 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 13 00:00:38.893679 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 13 00:00:38.893776 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 00:00:38.895410 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 13 00:00:38.895494 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 00:00:38.897103 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 13 00:00:38.897230 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 00:00:38.899043 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 13 00:00:38.899155 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 13 00:00:38.901838 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 13 00:00:38.904400 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 13 00:00:38.905512 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 13 00:00:38.905658 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 00:00:38.907978 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 13 00:00:38.908087 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 00:00:38.913534 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 13 00:00:38.919058 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 13 00:00:38.927838 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 13 00:00:38.935873 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 13 00:00:38.935994 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 13 00:00:38.939566 ignition[1040]: INFO : Ignition 2.22.0
Oct 13 00:00:38.939566 ignition[1040]: INFO : Stage: umount
Oct 13 00:00:38.941339 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 00:00:38.941339 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 00:00:38.941339 ignition[1040]: INFO : umount: umount passed
Oct 13 00:00:38.941339 ignition[1040]: INFO : Ignition finished successfully
Oct 13 00:00:38.942078 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 13 00:00:38.942211 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 13 00:00:38.943483 systemd[1]: Stopped target network.target - Network.
Oct 13 00:00:38.945773 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 13 00:00:38.945870 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 13 00:00:38.947595 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 13 00:00:38.947655 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 13 00:00:38.949389 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 13 00:00:38.949449 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 13 00:00:38.951241 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 13 00:00:38.951289 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 13 00:00:38.953066 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 13 00:00:38.953126 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 13 00:00:38.954973 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 13 00:00:38.956790 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 13 00:00:38.961373 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 13 00:00:38.961493 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 13 00:00:38.964857 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Oct 13 00:00:38.965219 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 13 00:00:38.965262 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 00:00:38.969062 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 13 00:00:38.970762 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 13 00:00:38.970959 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 13 00:00:38.976622 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Oct 13 00:00:38.976906 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 13 00:00:38.980022 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 13 00:00:38.980063 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 00:00:38.983364 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 13 00:00:38.984803 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 13 00:00:38.984889 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 00:00:38.987101 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 00:00:38.987156 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 13 00:00:38.990310 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 00:00:38.990363 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 13 00:00:38.992684 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 00:00:38.996578 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 13 00:00:39.014598 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 00:00:39.014747 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 00:00:39.017229 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 13 00:00:39.017326 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 13 00:00:39.019570 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 13 00:00:39.019636 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 13 00:00:39.021965 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 13 00:00:39.022002 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 00:00:39.023866 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 13 00:00:39.023923 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 00:00:39.026919 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 13 00:00:39.026986 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 13 00:00:39.029735 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 13 00:00:39.029796 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 00:00:39.033688 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 13 00:00:39.034842 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 13 00:00:39.034909 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 00:00:39.037980 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 13 00:00:39.038030 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 00:00:39.041654 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 13 00:00:39.041705 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 00:00:39.045239 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 13 00:00:39.045289 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 00:00:39.047498 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 00:00:39.047550 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:00:39.065464 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 13 00:00:39.065599 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 13 00:00:39.068015 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 13 00:00:39.070740 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 13 00:00:39.081026 systemd[1]: Switching root.
Oct 13 00:00:39.106580 systemd-journald[244]: Journal stopped
Oct 13 00:00:39.870371 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Oct 13 00:00:39.870422 kernel: SELinux: policy capability network_peer_controls=1
Oct 13 00:00:39.870441 kernel: SELinux: policy capability open_perms=1
Oct 13 00:00:39.870450 kernel: SELinux: policy capability extended_socket_class=1
Oct 13 00:00:39.870459 kernel: SELinux: policy capability always_check_network=0
Oct 13 00:00:39.870467 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 13 00:00:39.870476 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 13 00:00:39.870485 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 13 00:00:39.870494 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 13 00:00:39.870506 kernel: SELinux: policy capability userspace_initial_context=0
Oct 13 00:00:39.870515 kernel: audit: type=1403 audit(1760313639.236:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 13 00:00:39.870527 systemd[1]: Successfully loaded SELinux policy in 57.983ms.
Oct 13 00:00:39.870544 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.502ms.
Oct 13 00:00:39.870559 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 00:00:39.870569 systemd[1]: Detected virtualization kvm.
Oct 13 00:00:39.870580 systemd[1]: Detected architecture arm64.
Oct 13 00:00:39.870594 systemd[1]: Detected first boot.
Oct 13 00:00:39.870603 systemd[1]: Initializing machine ID from VM UUID.
Oct 13 00:00:39.870613 zram_generator::config[1086]: No configuration found.
Oct 13 00:00:39.870625 kernel: NET: Registered PF_VSOCK protocol family
Oct 13 00:00:39.870634 systemd[1]: Populated /etc with preset unit settings.
Oct 13 00:00:39.870645 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Oct 13 00:00:39.870655 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 13 00:00:39.870664 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 13 00:00:39.870674 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 13 00:00:39.870684 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 13 00:00:39.870694 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 13 00:00:39.870703 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 13 00:00:39.870715 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 13 00:00:39.870725 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 13 00:00:39.870734 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 13 00:00:39.870745 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 13 00:00:39.870754 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 13 00:00:39.870764 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 00:00:39.870774 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 00:00:39.870784 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 13 00:00:39.870794 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 13 00:00:39.870805 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 13 00:00:39.870815 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 00:00:39.870838 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 13 00:00:39.870865 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 00:00:39.870877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 00:00:39.870887 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 13 00:00:39.870896 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 13 00:00:39.870908 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 13 00:00:39.870918 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 13 00:00:39.870927 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 00:00:39.870937 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 00:00:39.870956 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 00:00:39.870967 systemd[1]: Reached target swap.target - Swaps.
Oct 13 00:00:39.870977 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 13 00:00:39.870987 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 13 00:00:39.870997 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 13 00:00:39.871008 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 00:00:39.871021 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 00:00:39.871031 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 00:00:39.871041 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 13 00:00:39.871051 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 13 00:00:39.871061 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 13 00:00:39.871071 systemd[1]: Mounting media.mount - External Media Directory...
Oct 13 00:00:39.871080 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 13 00:00:39.871090 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 13 00:00:39.871101 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 13 00:00:39.871111 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 13 00:00:39.871122 systemd[1]: Reached target machines.target - Containers.
Oct 13 00:00:39.871132 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 13 00:00:39.871142 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 00:00:39.871152 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 00:00:39.871162 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 13 00:00:39.871171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 00:00:39.871181 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 00:00:39.871192 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 00:00:39.871202 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 13 00:00:39.871211 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 00:00:39.871221 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 13 00:00:39.871234 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 13 00:00:39.871243 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 13 00:00:39.871253 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 13 00:00:39.871263 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 13 00:00:39.871274 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 00:00:39.871284 kernel: loop: module loaded
Oct 13 00:00:39.871293 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 00:00:39.871304 kernel: ACPI: bus type drm_connector registered
Oct 13 00:00:39.871312 kernel: fuse: init (API version 7.41)
Oct 13 00:00:39.871322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 00:00:39.871332 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 00:00:39.871342 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 13 00:00:39.871351 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 13 00:00:39.871387 systemd-journald[1168]: Collecting audit messages is disabled.
Oct 13 00:00:39.871410 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 00:00:39.871422 systemd-journald[1168]: Journal started
Oct 13 00:00:39.871444 systemd-journald[1168]: Runtime Journal (/run/log/journal/80cdca93d13f4b9290ca273dc40347d9) is 6M, max 48.5M, 42.4M free.
Oct 13 00:00:39.617887 systemd[1]: Queued start job for default target multi-user.target.
Oct 13 00:00:39.642142 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 13 00:00:39.642581 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 00:00:39.875328 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 13 00:00:39.875364 systemd[1]: Stopped verity-setup.service.
Oct 13 00:00:39.880668 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 00:00:39.881404 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 13 00:00:39.882662 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 13 00:00:39.884052 systemd[1]: Mounted media.mount - External Media Directory.
Oct 13 00:00:39.885181 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 13 00:00:39.886425 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 13 00:00:39.887787 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 13 00:00:39.890868 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 13 00:00:39.892470 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 00:00:39.894139 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 00:00:39.894325 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 13 00:00:39.895790 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 00:00:39.896002 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 00:00:39.897615 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 00:00:39.897786 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 00:00:39.900201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 00:00:39.900407 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 00:00:39.901984 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 13 00:00:39.902159 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 13 00:00:39.903532 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 00:00:39.903711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 00:00:39.905466 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 00:00:39.907063 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 00:00:39.908875 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 13 00:00:39.910537 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 13 00:00:39.925911 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 00:00:39.928844 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 00:00:39.932408 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 13 00:00:39.934865 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 13 00:00:39.936140 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 13 00:00:39.936190 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 00:00:39.938278 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 13 00:00:39.947781 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 13 00:00:39.949099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 00:00:39.950352 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 13 00:00:39.952687 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 13 00:00:39.954158 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 00:00:39.959015 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 13 00:00:39.960504 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 00:00:39.965075 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 00:00:39.968237 systemd-journald[1168]: Time spent on flushing to /var/log/journal/80cdca93d13f4b9290ca273dc40347d9 is 37.617ms for 873 entries.
Oct 13 00:00:39.968237 systemd-journald[1168]: System Journal (/var/log/journal/80cdca93d13f4b9290ca273dc40347d9) is 8M, max 195.6M, 187.6M free.
Oct 13 00:00:40.018004 systemd-journald[1168]: Received client request to flush runtime journal.
Oct 13 00:00:40.018069 kernel: loop0: detected capacity change from 0 to 200800
Oct 13 00:00:40.018090 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 13 00:00:39.967618 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 13 00:00:39.972163 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 00:00:39.976175 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 13 00:00:39.979921 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 13 00:00:39.990533 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 13 00:00:39.992122 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 13 00:00:39.995992 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 13 00:00:39.998992 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 00:00:40.008634 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Oct 13 00:00:40.008645 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Oct 13 00:00:40.012313 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 00:00:40.015394 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 13 00:00:40.020481 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 13 00:00:40.032477 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 13 00:00:40.046861 kernel: loop1: detected capacity change from 0 to 119368
Oct 13 00:00:40.054297 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 13 00:00:40.059212 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 00:00:40.082846 kernel: loop2: detected capacity change from 0 to 100632
Oct 13 00:00:40.087661 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Oct 13 00:00:40.087681 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Oct 13 00:00:40.092003 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 00:00:40.132870 kernel: loop3: detected capacity change from 0 to 200800
Oct 13 00:00:40.140843 kernel: loop4: detected capacity change from 0 to 119368
Oct 13 00:00:40.147855 kernel: loop5: detected capacity change from 0 to 100632
Oct 13 00:00:40.154638 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 13 00:00:40.155140 (sd-merge)[1228]: Merged extensions into '/usr'.
Oct 13 00:00:40.159453 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 13 00:00:40.159476 systemd[1]: Reloading...
Oct 13 00:00:40.216869 zram_generator::config[1254]: No configuration found.
Oct 13 00:00:40.293404 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 13 00:00:40.371087 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 13 00:00:40.371157 systemd[1]: Reloading finished in 211 ms.
Oct 13 00:00:40.405408 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 13 00:00:40.407119 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 13 00:00:40.421359 systemd[1]: Starting ensure-sysext.service...
Oct 13 00:00:40.423446 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 00:00:40.433755 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)...
Oct 13 00:00:40.433771 systemd[1]: Reloading...
Oct 13 00:00:40.440170 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 13 00:00:40.440534 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 13 00:00:40.440865 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 13 00:00:40.441171 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 13 00:00:40.442049 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 13 00:00:40.442449 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Oct 13 00:00:40.442575 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Oct 13 00:00:40.446735 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 00:00:40.446916 systemd-tmpfiles[1289]: Skipping /boot
Oct 13 00:00:40.453341 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 00:00:40.453481 systemd-tmpfiles[1289]: Skipping /boot
Oct 13 00:00:40.482862 zram_generator::config[1322]: No configuration found.
Oct 13 00:00:40.610695 systemd[1]: Reloading finished in 176 ms.
Oct 13 00:00:40.636206 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 13 00:00:40.642306 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 00:00:40.651971 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 00:00:40.654630 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 13 00:00:40.667746 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 13 00:00:40.671283 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 00:00:40.676044 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 00:00:40.681147 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 13 00:00:40.695180 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 13 00:00:40.697119 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 13 00:00:40.705705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 00:00:40.708367 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 00:00:40.711428 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 00:00:40.723695 systemd-udevd[1357]: Using default interface naming scheme 'v255'.
Oct 13 00:00:40.724798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 00:00:40.726142 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 00:00:40.726330 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 00:00:40.727895 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 13 00:00:40.731091 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 13 00:00:40.733679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 00:00:40.733867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 00:00:40.736516 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 00:00:40.736722 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 00:00:40.739319 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 13 00:00:40.742749 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 00:00:40.742960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 00:00:40.748246 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 13 00:00:40.750031 augenrules[1387]: No rules
Oct 13 00:00:40.754618 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 13 00:00:40.758333 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 00:00:40.760886 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 13 00:00:40.761125 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 13 00:00:40.777859 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 00:00:40.779020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 00:00:40.780547 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 00:00:40.783002 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 00:00:40.790259 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 00:00:40.793146 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 00:00:40.794259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 00:00:40.794387 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 00:00:40.797340 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 00:00:40.798509 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 13 00:00:40.810549 systemd[1]: Finished ensure-sysext.service.
Oct 13 00:00:40.818413 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 13 00:00:40.821308 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 00:00:40.821519 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 00:00:40.823379 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 00:00:40.823609 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 00:00:40.823672 augenrules[1427]: /sbin/augenrules: No change
Oct 13 00:00:40.825224 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 00:00:40.825435 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 00:00:40.827434 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 00:00:40.827615 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 00:00:40.834158 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 00:00:40.834218 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 00:00:40.838271 augenrules[1456]: No rules
Oct 13 00:00:40.840376 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 13 00:00:40.840697 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 13 00:00:40.849260 systemd-resolved[1356]: Positive Trust Anchors:
Oct 13 00:00:40.849554 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 00:00:40.849590 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 00:00:40.858969 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 13 00:00:40.863091 systemd-resolved[1356]: Defaulting to hostname 'linux'.
Oct 13 00:00:40.865931 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 00:00:40.867640 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 00:00:40.910745 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 13 00:00:40.915278 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 13 00:00:40.943676 systemd-networkd[1435]: lo: Link UP
Oct 13 00:00:40.943690 systemd-networkd[1435]: lo: Gained carrier
Oct 13 00:00:40.944809 systemd-networkd[1435]: Enumeration completed
Oct 13 00:00:40.947180 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 00:00:40.947553 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:00:40.947562 systemd-networkd[1435]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 00:00:40.948231 systemd-networkd[1435]: eth0: Link UP
Oct 13 00:00:40.948353 systemd-networkd[1435]: eth0: Gained carrier
Oct 13 00:00:40.948374 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:00:40.949087 systemd[1]: Reached target network.target - Network.
Oct 13 00:00:40.951988 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 13 00:00:40.954805 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 13 00:00:40.956160 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 13 00:00:40.958586 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 13 00:00:40.962951 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 00:00:40.964523 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 13 00:00:40.964914 systemd-networkd[1435]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 13 00:00:40.965930 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 13 00:00:40.967365 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection.
Oct 13 00:00:40.967501 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 13 00:00:41.447101 systemd-resolved[1356]: Clock change detected. Flushing caches.
Oct 13 00:00:41.447102 systemd-timesyncd[1442]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 13 00:00:41.447141 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 13 00:00:41.447163 systemd-timesyncd[1442]: Initial clock synchronization to Mon 2025-10-13 00:00:41.446996 UTC.
Oct 13 00:00:41.447173 systemd[1]: Reached target paths.target - Path Units.
Oct 13 00:00:41.448172 systemd[1]: Reached target time-set.target - System Time Set.
Oct 13 00:00:41.450288 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 13 00:00:41.451538 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 13 00:00:41.452866 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 00:00:41.454770 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 13 00:00:41.457467 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 13 00:00:41.460731 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 13 00:00:41.463068 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 13 00:00:41.464467 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 13 00:00:41.469628 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 00:00:41.471635 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 00:00:41.474513 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 00:00:41.476306 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 00:00:41.480321 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 00:00:41.481426 systemd[1]: Reached target basic.target - Basic System. Oct 13 00:00:41.483296 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 00:00:41.483334 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 00:00:41.485950 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 00:00:41.488857 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 00:00:41.499121 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 00:00:41.503663 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 00:00:41.506110 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 00:00:41.507514 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 00:00:41.508926 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 00:00:41.511470 jq[1495]: false Oct 13 00:00:41.513152 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 00:00:41.515624 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 00:00:41.521357 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 13 00:00:41.523747 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 00:00:41.525936 extend-filesystems[1496]: Found /dev/vda6 Oct 13 00:00:41.526156 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 00:00:41.527094 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 00:00:41.532064 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 00:00:41.532623 extend-filesystems[1496]: Found /dev/vda9 Oct 13 00:00:41.537032 extend-filesystems[1496]: Checking size of /dev/vda9 Oct 13 00:00:41.539837 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 00:00:41.544407 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 00:00:41.544594 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 00:00:41.544965 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 00:00:41.545140 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 00:00:41.546740 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 00:00:41.546950 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 00:00:41.549828 extend-filesystems[1496]: Resized partition /dev/vda9 Oct 13 00:00:41.561846 jq[1511]: true Oct 13 00:00:41.567432 extend-filesystems[1525]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 00:00:41.572687 update_engine[1507]: I20251013 00:00:41.568714 1507 main.cc:92] Flatcar Update Engine starting Oct 13 00:00:41.573024 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 13 00:00:41.589018 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 13 00:00:41.590669 dbus-daemon[1493]: [system] SELinux support is enabled Oct 13 00:00:41.591087 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 00:00:41.594852 jq[1528]: true Oct 13 00:00:41.597267 update_engine[1507]: I20251013 00:00:41.597208 1507 update_check_scheduler.cc:74] Next update check in 4m31s Oct 13 00:00:41.599780 (ntainerd)[1533]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 00:00:41.607197 systemd[1]: Started update-engine.service - Update Engine. Oct 13 00:00:41.612057 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 00:00:41.612080 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 00:00:41.613693 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 00:00:41.613708 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 00:00:41.617046 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 00:00:41.646736 systemd-logind[1504]: Watching system buttons on /dev/input/event0 (Power Button) Oct 13 00:00:41.646990 systemd-logind[1504]: New seat seat0. Oct 13 00:00:41.648898 systemd[1]: Started systemd-logind.service - User Login Management. 
Oct 13 00:00:41.655831 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 13 00:00:41.660248 locksmithd[1551]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 00:00:41.666162 extend-filesystems[1525]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 00:00:41.666162 extend-filesystems[1525]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 00:00:41.666162 extend-filesystems[1525]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 13 00:00:41.673056 extend-filesystems[1496]: Resized filesystem in /dev/vda9 Oct 13 00:00:41.668250 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 00:00:41.674130 bash[1552]: Updated "/home/core/.ssh/authorized_keys" Oct 13 00:00:41.668523 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 00:00:41.691832 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 00:00:41.693972 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:00:41.697309 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Oct 13 00:00:41.769156 containerd[1533]: time="2025-10-13T00:00:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 00:00:41.771137 containerd[1533]: time="2025-10-13T00:00:41.771097897Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 00:00:41.781091 containerd[1533]: time="2025-10-13T00:00:41.781044737Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.48µs" Oct 13 00:00:41.781091 containerd[1533]: time="2025-10-13T00:00:41.781090097Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 00:00:41.781207 containerd[1533]: time="2025-10-13T00:00:41.781192217Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 00:00:41.781380 containerd[1533]: time="2025-10-13T00:00:41.781363657Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 00:00:41.781407 containerd[1533]: time="2025-10-13T00:00:41.781386377Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 00:00:41.781426 containerd[1533]: time="2025-10-13T00:00:41.781414217Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 00:00:41.781488 containerd[1533]: time="2025-10-13T00:00:41.781471457Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 00:00:41.781488 containerd[1533]: time="2025-10-13T00:00:41.781486217Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 
00:00:41.781740 containerd[1533]: time="2025-10-13T00:00:41.781720257Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 00:00:41.781740 containerd[1533]: time="2025-10-13T00:00:41.781738457Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 00:00:41.781781 containerd[1533]: time="2025-10-13T00:00:41.781750617Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 00:00:41.781781 containerd[1533]: time="2025-10-13T00:00:41.781759417Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 00:00:41.781873 containerd[1533]: time="2025-10-13T00:00:41.781853017Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 00:00:41.782213 containerd[1533]: time="2025-10-13T00:00:41.782182577Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 00:00:41.782248 containerd[1533]: time="2025-10-13T00:00:41.782231297Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 00:00:41.782272 containerd[1533]: time="2025-10-13T00:00:41.782249977Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 00:00:41.782847 containerd[1533]: time="2025-10-13T00:00:41.782815537Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 00:00:41.783405 
containerd[1533]: time="2025-10-13T00:00:41.783375657Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 00:00:41.783507 containerd[1533]: time="2025-10-13T00:00:41.783488057Z" level=info msg="metadata content store policy set" policy=shared Oct 13 00:00:41.787894 containerd[1533]: time="2025-10-13T00:00:41.787849417Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 00:00:41.787948 containerd[1533]: time="2025-10-13T00:00:41.787925137Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 00:00:41.787948 containerd[1533]: time="2025-10-13T00:00:41.787943777Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 00:00:41.787982 containerd[1533]: time="2025-10-13T00:00:41.787957377Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 00:00:41.787982 containerd[1533]: time="2025-10-13T00:00:41.787970657Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 00:00:41.788043 containerd[1533]: time="2025-10-13T00:00:41.787981817Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 00:00:41.788043 containerd[1533]: time="2025-10-13T00:00:41.787995657Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 00:00:41.788043 containerd[1533]: time="2025-10-13T00:00:41.788007417Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 00:00:41.788089 containerd[1533]: time="2025-10-13T00:00:41.788052657Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 00:00:41.788089 containerd[1533]: 
time="2025-10-13T00:00:41.788064857Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 00:00:41.788089 containerd[1533]: time="2025-10-13T00:00:41.788074057Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 00:00:41.788089 containerd[1533]: time="2025-10-13T00:00:41.788086537Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788243137Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788275257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788292537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788310017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788323457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788336377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788351657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788362577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788373897Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788384017Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788394697Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788584257Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788643297Z" level=info msg="Start snapshots syncer" Oct 13 00:00:41.788718 containerd[1533]: time="2025-10-13T00:00:41.788672057Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 00:00:41.789045 containerd[1533]: time="2025-10-13T00:00:41.788926977Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 00:00:41.789045 containerd[1533]: time="2025-10-13T00:00:41.788986577Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 00:00:41.789157 containerd[1533]: time="2025-10-13T00:00:41.789068897Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 00:00:41.789216 containerd[1533]: time="2025-10-13T00:00:41.789192177Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 00:00:41.789245 containerd[1533]: time="2025-10-13T00:00:41.789223017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 00:00:41.789245 containerd[1533]: time="2025-10-13T00:00:41.789234417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 00:00:41.789278 containerd[1533]: time="2025-10-13T00:00:41.789245817Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 00:00:41.789278 containerd[1533]: time="2025-10-13T00:00:41.789259017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 00:00:41.789278 containerd[1533]: time="2025-10-13T00:00:41.789275097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 00:00:41.789331 containerd[1533]: time="2025-10-13T00:00:41.789288857Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 00:00:41.789331 containerd[1533]: time="2025-10-13T00:00:41.789326017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 00:00:41.789365 containerd[1533]: time="2025-10-13T00:00:41.789338817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 00:00:41.789365 containerd[1533]: time="2025-10-13T00:00:41.789350057Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 00:00:41.789485 containerd[1533]: time="2025-10-13T00:00:41.789388297Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 00:00:41.789485 containerd[1533]: time="2025-10-13T00:00:41.789405577Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 00:00:41.789485 containerd[1533]: time="2025-10-13T00:00:41.789415137Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 00:00:41.789485 containerd[1533]: time="2025-10-13T00:00:41.789425097Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 00:00:41.789485 containerd[1533]: time="2025-10-13T00:00:41.789435937Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 00:00:41.789485 containerd[1533]: time="2025-10-13T00:00:41.789457217Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 00:00:41.789485 containerd[1533]: time="2025-10-13T00:00:41.789468017Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 00:00:41.789605 containerd[1533]: time="2025-10-13T00:00:41.789543537Z" level=info msg="runtime interface created" Oct 13 00:00:41.789605 containerd[1533]: time="2025-10-13T00:00:41.789548777Z" level=info msg="created NRI interface" Oct 13 00:00:41.789605 containerd[1533]: time="2025-10-13T00:00:41.789557577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 00:00:41.789605 containerd[1533]: time="2025-10-13T00:00:41.789569297Z" level=info msg="Connect containerd service" Oct 13 00:00:41.789605 containerd[1533]: time="2025-10-13T00:00:41.789596897Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 00:00:41.790494 
containerd[1533]: time="2025-10-13T00:00:41.790460417Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 00:00:41.855775 containerd[1533]: time="2025-10-13T00:00:41.855640817Z" level=info msg="Start subscribing containerd event" Oct 13 00:00:41.855775 containerd[1533]: time="2025-10-13T00:00:41.855710377Z" level=info msg="Start recovering state" Oct 13 00:00:41.855929 containerd[1533]: time="2025-10-13T00:00:41.855806177Z" level=info msg="Start event monitor" Oct 13 00:00:41.855929 containerd[1533]: time="2025-10-13T00:00:41.855932417Z" level=info msg="Start cni network conf syncer for default" Oct 13 00:00:41.855929 containerd[1533]: time="2025-10-13T00:00:41.855942097Z" level=info msg="Start streaming server" Oct 13 00:00:41.856153 containerd[1533]: time="2025-10-13T00:00:41.855961977Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 00:00:41.856153 containerd[1533]: time="2025-10-13T00:00:41.856096297Z" level=info msg="runtime interface starting up..." Oct 13 00:00:41.856153 containerd[1533]: time="2025-10-13T00:00:41.856103257Z" level=info msg="starting plugins..." Oct 13 00:00:41.856153 containerd[1533]: time="2025-10-13T00:00:41.856119737Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 00:00:41.856435 containerd[1533]: time="2025-10-13T00:00:41.856376577Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 00:00:41.856487 containerd[1533]: time="2025-10-13T00:00:41.856437777Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 00:00:41.856519 containerd[1533]: time="2025-10-13T00:00:41.856503817Z" level=info msg="containerd successfully booted in 0.087728s" Oct 13 00:00:41.856647 systemd[1]: Started containerd.service - containerd container runtime. 
Oct 13 00:00:43.118921 systemd-networkd[1435]: eth0: Gained IPv6LL Oct 13 00:00:43.122916 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 00:00:43.125773 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 00:00:43.128879 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 00:00:43.132289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:00:43.142893 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 00:00:43.172064 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 00:00:43.173889 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 13 00:00:43.175643 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 00:00:43.180139 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 00:00:43.381238 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 00:00:43.403894 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 00:00:43.407675 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 00:00:43.433430 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 00:00:43.433716 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 00:00:43.436527 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 00:00:43.461839 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 00:00:43.464761 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 00:00:43.467274 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 13 00:00:43.468754 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 00:00:43.748765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 00:00:43.750574 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 00:00:43.752737 systemd[1]: Startup finished in 2.054s (kernel) + 4.626s (initrd) + 4.096s (userspace) = 10.776s. Oct 13 00:00:43.762624 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 00:00:44.090373 kubelet[1627]: E1013 00:00:44.090254 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 00:00:44.092429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 00:00:44.092562 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 00:00:44.092916 systemd[1]: kubelet.service: Consumed 698ms CPU time, 248.1M memory peak. Oct 13 00:00:48.254222 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 00:00:48.255384 systemd[1]: Started sshd@0-10.0.0.51:22-10.0.0.1:51948.service - OpenSSH per-connection server daemon (10.0.0.1:51948). Oct 13 00:00:48.331394 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 51948 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:00:48.333691 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:00:48.342253 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 00:00:48.343231 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 00:00:48.349834 systemd-logind[1504]: New session 1 of user core. Oct 13 00:00:48.370662 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Oct 13 00:00:48.373464 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 00:00:48.401151 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 00:00:48.406995 systemd-logind[1504]: New session c1 of user core. Oct 13 00:00:48.530590 systemd[1645]: Queued start job for default target default.target. Oct 13 00:00:48.546778 systemd[1645]: Created slice app.slice - User Application Slice. Oct 13 00:00:48.546835 systemd[1645]: Reached target paths.target - Paths. Oct 13 00:00:48.546877 systemd[1645]: Reached target timers.target - Timers. Oct 13 00:00:48.548152 systemd[1645]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 00:00:48.558869 systemd[1645]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 00:00:48.559164 systemd[1645]: Reached target sockets.target - Sockets. Oct 13 00:00:48.559284 systemd[1645]: Reached target basic.target - Basic System. Oct 13 00:00:48.559426 systemd[1645]: Reached target default.target - Main User Target. Oct 13 00:00:48.559446 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 00:00:48.559552 systemd[1645]: Startup finished in 144ms. Oct 13 00:00:48.560634 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 00:00:48.623409 systemd[1]: Started sshd@1-10.0.0.51:22-10.0.0.1:51960.service - OpenSSH per-connection server daemon (10.0.0.1:51960). Oct 13 00:00:48.667068 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 51960 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:00:48.668376 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:00:48.672865 systemd-logind[1504]: New session 2 of user core. Oct 13 00:00:48.685005 systemd[1]: Started session-2.scope - Session 2 of User core. 
Oct 13 00:00:48.736551 sshd[1659]: Connection closed by 10.0.0.1 port 51960 Oct 13 00:00:48.737012 sshd-session[1656]: pam_unix(sshd:session): session closed for user core Oct 13 00:00:48.755167 systemd[1]: sshd@1-10.0.0.51:22-10.0.0.1:51960.service: Deactivated successfully. Oct 13 00:00:48.758253 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 00:00:48.759750 systemd-logind[1504]: Session 2 logged out. Waiting for processes to exit. Oct 13 00:00:48.761355 systemd[1]: Started sshd@2-10.0.0.51:22-10.0.0.1:51974.service - OpenSSH per-connection server daemon (10.0.0.1:51974). Oct 13 00:00:48.762178 systemd-logind[1504]: Removed session 2. Oct 13 00:00:48.820080 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 51974 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:00:48.821405 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:00:48.825531 systemd-logind[1504]: New session 3 of user core. Oct 13 00:00:48.834005 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 00:00:48.881838 sshd[1668]: Connection closed by 10.0.0.1 port 51974 Oct 13 00:00:48.881878 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Oct 13 00:00:48.897950 systemd[1]: sshd@2-10.0.0.51:22-10.0.0.1:51974.service: Deactivated successfully. Oct 13 00:00:48.901163 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 00:00:48.901867 systemd-logind[1504]: Session 3 logged out. Waiting for processes to exit. Oct 13 00:00:48.904043 systemd[1]: Started sshd@3-10.0.0.51:22-10.0.0.1:51976.service - OpenSSH per-connection server daemon (10.0.0.1:51976). Oct 13 00:00:48.904510 systemd-logind[1504]: Removed session 3. 
Oct 13 00:00:48.968932 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 51976 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:00:48.970234 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:00:48.974024 systemd-logind[1504]: New session 4 of user core. Oct 13 00:00:48.984005 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 00:00:49.036833 sshd[1677]: Connection closed by 10.0.0.1 port 51976 Oct 13 00:00:49.037206 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Oct 13 00:00:49.048966 systemd[1]: sshd@3-10.0.0.51:22-10.0.0.1:51976.service: Deactivated successfully. Oct 13 00:00:49.051300 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 00:00:49.054013 systemd-logind[1504]: Session 4 logged out. Waiting for processes to exit. Oct 13 00:00:49.057561 systemd[1]: Started sshd@4-10.0.0.51:22-10.0.0.1:51978.service - OpenSSH per-connection server daemon (10.0.0.1:51978). Oct 13 00:00:49.058390 systemd-logind[1504]: Removed session 4. Oct 13 00:00:49.117962 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 51978 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:00:49.119206 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:00:49.123914 systemd-logind[1504]: New session 5 of user core. Oct 13 00:00:49.128979 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 13 00:00:49.188126 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 13 00:00:49.188391 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 00:00:49.205746 sudo[1687]: pam_unix(sudo:session): session closed for user root
Oct 13 00:00:49.208121 sshd[1686]: Connection closed by 10.0.0.1 port 51978
Oct 13 00:00:49.207906 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
Oct 13 00:00:49.220531 systemd[1]: sshd@4-10.0.0.51:22-10.0.0.1:51978.service: Deactivated successfully.
Oct 13 00:00:49.223399 systemd[1]: session-5.scope: Deactivated successfully.
Oct 13 00:00:49.224703 systemd-logind[1504]: Session 5 logged out. Waiting for processes to exit.
Oct 13 00:00:49.229087 systemd[1]: Started sshd@5-10.0.0.51:22-10.0.0.1:51984.service - OpenSSH per-connection server daemon (10.0.0.1:51984).
Oct 13 00:00:49.229642 systemd-logind[1504]: Removed session 5.
Oct 13 00:00:49.295767 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 51984 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw
Oct 13 00:00:49.297757 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 00:00:49.302976 systemd-logind[1504]: New session 6 of user core.
Oct 13 00:00:49.309986 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 13 00:00:49.367017 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 13 00:00:49.367292 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 00:00:49.450583 sudo[1698]: pam_unix(sudo:session): session closed for user root
Oct 13 00:00:49.455761 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 13 00:00:49.456816 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 00:00:49.469000 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 00:00:49.508121 augenrules[1720]: No rules
Oct 13 00:00:49.509250 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 13 00:00:49.509452 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 13 00:00:49.511020 sudo[1697]: pam_unix(sudo:session): session closed for user root
Oct 13 00:00:49.512533 sshd[1696]: Connection closed by 10.0.0.1 port 51984
Oct 13 00:00:49.513026 sshd-session[1693]: pam_unix(sshd:session): session closed for user core
Oct 13 00:00:49.529649 systemd[1]: sshd@5-10.0.0.51:22-10.0.0.1:51984.service: Deactivated successfully.
Oct 13 00:00:49.532439 systemd[1]: session-6.scope: Deactivated successfully.
Oct 13 00:00:49.536642 systemd-logind[1504]: Session 6 logged out. Waiting for processes to exit.
Oct 13 00:00:49.538576 systemd[1]: Started sshd@6-10.0.0.51:22-10.0.0.1:52000.service - OpenSSH per-connection server daemon (10.0.0.1:52000).
Oct 13 00:00:49.542148 systemd-logind[1504]: Removed session 6.
Oct 13 00:00:49.589292 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 52000 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw
Oct 13 00:00:49.590626 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 00:00:49.598764 systemd-logind[1504]: New session 7 of user core.
Oct 13 00:00:49.616684 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 13 00:00:49.669631 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 13 00:00:49.670483 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 00:00:49.684440 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 13 00:00:49.741747 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 13 00:00:49.742000 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 13 00:00:50.209981 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:00:50.210491 systemd[1]: kubelet.service: Consumed 698ms CPU time, 248.1M memory peak.
Oct 13 00:00:50.212418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:00:50.234567 systemd[1]: Reload requested from client PID 1775 ('systemctl') (unit session-7.scope)...
Oct 13 00:00:50.234582 systemd[1]: Reloading...
Oct 13 00:00:50.313818 zram_generator::config[1820]: No configuration found.
Oct 13 00:00:50.497266 systemd[1]: Reloading finished in 262 ms.
Oct 13 00:00:50.538696 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:00:50.541520 systemd[1]: kubelet.service: Deactivated successfully.
Oct 13 00:00:50.541805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:00:50.541865 systemd[1]: kubelet.service: Consumed 97ms CPU time, 95.1M memory peak.
Oct 13 00:00:50.543589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:00:50.689152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:00:50.693653 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 13 00:00:50.731264 kubelet[1864]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 13 00:00:50.731767 kubelet[1864]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 13 00:00:50.732230 kubelet[1864]: I1013 00:00:50.732182 1864 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 13 00:00:51.636002 kubelet[1864]: I1013 00:00:51.635963 1864 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Oct 13 00:00:51.636002 kubelet[1864]: I1013 00:00:51.635994 1864 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 13 00:00:51.637091 kubelet[1864]: I1013 00:00:51.637062 1864 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Oct 13 00:00:51.637091 kubelet[1864]: I1013 00:00:51.637083 1864 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 13 00:00:51.638835 kubelet[1864]: I1013 00:00:51.637816 1864 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 13 00:00:51.644706 kubelet[1864]: I1013 00:00:51.644663 1864 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 13 00:00:51.654960 kubelet[1864]: I1013 00:00:51.654930 1864 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 13 00:00:51.657643 kubelet[1864]: I1013 00:00:51.657610 1864 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Oct 13 00:00:51.657863 kubelet[1864]: I1013 00:00:51.657833 1864 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 13 00:00:51.658041 kubelet[1864]: I1013 00:00:51.657860 1864 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.51","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 13 00:00:51.658041 kubelet[1864]: I1013 00:00:51.658030 1864 topology_manager.go:138] "Creating topology manager with none policy"
Oct 13 00:00:51.658041 kubelet[1864]: I1013 00:00:51.658041 1864 container_manager_linux.go:306] "Creating device plugin manager"
Oct 13 00:00:51.658178 kubelet[1864]: I1013 00:00:51.658146 1864 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 13 00:00:51.746923 kubelet[1864]: I1013 00:00:51.746871 1864 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 00:00:51.748422 kubelet[1864]: I1013 00:00:51.748370 1864 kubelet.go:475] "Attempting to sync node with API server"
Oct 13 00:00:51.748422 kubelet[1864]: I1013 00:00:51.748404 1864 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 13 00:00:51.749622 kubelet[1864]: I1013 00:00:51.749597 1864 kubelet.go:387] "Adding apiserver pod source"
Oct 13 00:00:51.749712 kubelet[1864]: I1013 00:00:51.749669 1864 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 13 00:00:51.749856 kubelet[1864]: E1013 00:00:51.749828 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 13 00:00:51.749956 kubelet[1864]: E1013 00:00:51.749787 1864 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 13 00:00:51.751189 kubelet[1864]: I1013 00:00:51.751167 1864 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 13 00:00:51.751860 kubelet[1864]: I1013 00:00:51.751840 1864 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 13 00:00:51.751908 kubelet[1864]: I1013 00:00:51.751881 1864 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 13 00:00:51.751930 kubelet[1864]: W1013 00:00:51.751917 1864 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 13 00:00:51.755640 kubelet[1864]: I1013 00:00:51.755316 1864 server.go:1262] "Started kubelet"
Oct 13 00:00:51.755640 kubelet[1864]: I1013 00:00:51.755543 1864 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 13 00:00:51.757131 kubelet[1864]: I1013 00:00:51.756887 1864 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 13 00:00:51.757131 kubelet[1864]: I1013 00:00:51.756945 1864 server_v1.go:49] "podresources" method="list" useActivePods=true
Oct 13 00:00:51.757374 kubelet[1864]: I1013 00:00:51.757226 1864 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 13 00:00:51.758635 kubelet[1864]: E1013 00:00:51.758113 1864 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 13 00:00:51.761488 kubelet[1864]: I1013 00:00:51.761449 1864 server.go:310] "Adding debug handlers to kubelet server"
Oct 13 00:00:51.761674 kubelet[1864]: I1013 00:00:51.761612 1864 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 13 00:00:51.764980 kubelet[1864]: I1013 00:00:51.764943 1864 factory.go:223] Registration of the systemd container factory successfully
Oct 13 00:00:51.765087 kubelet[1864]: I1013 00:00:51.765063 1864 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 13 00:00:51.766812 kubelet[1864]: I1013 00:00:51.765954 1864 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 13 00:00:51.766812 kubelet[1864]: I1013 00:00:51.766123 1864 volume_manager.go:313] "Starting Kubelet Volume Manager"
Oct 13 00:00:51.766812 kubelet[1864]: E1013 00:00:51.766367 1864 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.51\" not found"
Oct 13 00:00:51.767326 kubelet[1864]: I1013 00:00:51.767300 1864 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 13 00:00:51.767455 kubelet[1864]: I1013 00:00:51.767445 1864 reconciler.go:29] "Reconciler: start to sync state"
Oct 13 00:00:51.769202 kubelet[1864]: I1013 00:00:51.769177 1864 factory.go:223] Registration of the containerd container factory successfully
Oct 13 00:00:51.774559 kubelet[1864]: E1013 00:00:51.774497 1864 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.51\" not found" node="10.0.0.51"
Oct 13 00:00:51.786065 kubelet[1864]: I1013 00:00:51.786035 1864 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 13 00:00:51.786065 kubelet[1864]: I1013 00:00:51.786052 1864 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 13 00:00:51.786204 kubelet[1864]: I1013 00:00:51.786079 1864 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 00:00:51.790408 kubelet[1864]: I1013 00:00:51.790370 1864 policy_none.go:49] "None policy: Start"
Oct 13 00:00:51.790408 kubelet[1864]: I1013 00:00:51.790407 1864 memory_manager.go:187] "Starting memorymanager" policy="None"
Oct 13 00:00:51.790592 kubelet[1864]: I1013 00:00:51.790420 1864 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Oct 13 00:00:51.792544 kubelet[1864]: I1013 00:00:51.792515 1864 policy_none.go:47] "Start"
Oct 13 00:00:51.797329 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 13 00:00:51.811527 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 13 00:00:51.815105 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 13 00:00:51.824184 kubelet[1864]: I1013 00:00:51.824133 1864 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Oct 13 00:00:51.825227 kubelet[1864]: I1013 00:00:51.825197 1864 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Oct 13 00:00:51.825227 kubelet[1864]: I1013 00:00:51.825226 1864 status_manager.go:244] "Starting to sync pod status with apiserver"
Oct 13 00:00:51.825307 kubelet[1864]: I1013 00:00:51.825259 1864 kubelet.go:2427] "Starting kubelet main sync loop"
Oct 13 00:00:51.825329 kubelet[1864]: E1013 00:00:51.825305 1864 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 13 00:00:51.827493 kubelet[1864]: E1013 00:00:51.827455 1864 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 13 00:00:51.828970 kubelet[1864]: I1013 00:00:51.828934 1864 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 13 00:00:51.829044 kubelet[1864]: I1013 00:00:51.828964 1864 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 13 00:00:51.829423 kubelet[1864]: I1013 00:00:51.829408 1864 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 13 00:00:51.830958 kubelet[1864]: E1013 00:00:51.830914 1864 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 13 00:00:51.831033 kubelet[1864]: E1013 00:00:51.830980 1864 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.51\" not found"
Oct 13 00:00:51.932100 kubelet[1864]: I1013 00:00:51.931990 1864 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.51"
Oct 13 00:00:51.938572 kubelet[1864]: I1013 00:00:51.938530 1864 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.51"
Oct 13 00:00:51.954643 kubelet[1864]: I1013 00:00:51.954611 1864 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Oct 13 00:00:51.955111 containerd[1533]: time="2025-10-13T00:00:51.955067977Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 13 00:00:51.955903 kubelet[1864]: I1013 00:00:51.955700 1864 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Oct 13 00:00:52.206660 sudo[1734]: pam_unix(sudo:session): session closed for user root
Oct 13 00:00:52.207881 sshd[1733]: Connection closed by 10.0.0.1 port 52000
Oct 13 00:00:52.208290 sshd-session[1730]: pam_unix(sshd:session): session closed for user core
Oct 13 00:00:52.211849 systemd[1]: sshd@6-10.0.0.51:22-10.0.0.1:52000.service: Deactivated successfully.
Oct 13 00:00:52.213866 systemd[1]: session-7.scope: Deactivated successfully.
Oct 13 00:00:52.214142 systemd[1]: session-7.scope: Consumed 419ms CPU time, 76.2M memory peak.
Oct 13 00:00:52.215126 systemd-logind[1504]: Session 7 logged out. Waiting for processes to exit.
Oct 13 00:00:52.216237 systemd-logind[1504]: Removed session 7.
Oct 13 00:00:52.642866 kubelet[1864]: I1013 00:00:52.642687 1864 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Oct 13 00:00:52.643023 kubelet[1864]: I1013 00:00:52.642977 1864 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Oct 13 00:00:52.643401 kubelet[1864]: I1013 00:00:52.643052 1864 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Oct 13 00:00:52.643401 kubelet[1864]: I1013 00:00:52.643151 1864 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Oct 13 00:00:52.750145 kubelet[1864]: E1013 00:00:52.750101 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 13 00:00:52.750145 kubelet[1864]: I1013 00:00:52.750117 1864 apiserver.go:52] "Watching apiserver"
Oct 13 00:00:52.773921 kubelet[1864]: I1013 00:00:52.773868 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e037f5d9-3977-4041-a88c-f00a75e8e420-cni-bin-dir\") pod \"calico-node-b5tbc\" (UID: \"e037f5d9-3977-4041-a88c-f00a75e8e420\") " pod="calico-system/calico-node-b5tbc"
Oct 13 00:00:52.774163 kubelet[1864]: I1013 00:00:52.773970 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e037f5d9-3977-4041-a88c-f00a75e8e420-cni-log-dir\") pod \"calico-node-b5tbc\" (UID: \"e037f5d9-3977-4041-a88c-f00a75e8e420\") " pod="calico-system/calico-node-b5tbc"
Oct 13 00:00:52.774163 kubelet[1864]: I1013 00:00:52.773998 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e037f5d9-3977-4041-a88c-f00a75e8e420-lib-modules\") pod \"calico-node-b5tbc\" (UID: \"e037f5d9-3977-4041-a88c-f00a75e8e420\") " pod="calico-system/calico-node-b5tbc"
Oct 13 00:00:52.774163 kubelet[1864]: I1013 00:00:52.774014 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e037f5d9-3977-4041-a88c-f00a75e8e420-tigera-ca-bundle\") pod \"calico-node-b5tbc\" (UID: \"e037f5d9-3977-4041-a88c-f00a75e8e420\") " pod="calico-system/calico-node-b5tbc"
Oct 13 00:00:52.774163 kubelet[1864]: I1013 00:00:52.774029 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e037f5d9-3977-4041-a88c-f00a75e8e420-var-lib-calico\") pod \"calico-node-b5tbc\" (UID: \"e037f5d9-3977-4041-a88c-f00a75e8e420\") " pod="calico-system/calico-node-b5tbc"
Oct 13 00:00:52.774163 kubelet[1864]: I1013 00:00:52.774047 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e037f5d9-3977-4041-a88c-f00a75e8e420-var-run-calico\") pod \"calico-node-b5tbc\" (UID: \"e037f5d9-3977-4041-a88c-f00a75e8e420\") " pod="calico-system/calico-node-b5tbc"
Oct 13 00:00:52.774281 kubelet[1864]: I1013 00:00:52.774086 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbpd8\" (UniqueName: \"kubernetes.io/projected/e037f5d9-3977-4041-a88c-f00a75e8e420-kube-api-access-hbpd8\") pod \"calico-node-b5tbc\" (UID: \"e037f5d9-3977-4041-a88c-f00a75e8e420\") " pod="calico-system/calico-node-b5tbc"
Oct 13 00:00:52.774281 kubelet[1864]: I1013 00:00:52.774133 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e037f5d9-3977-4041-a88c-f00a75e8e420-cni-net-dir\") pod \"calico-node-b5tbc\" (UID: \"e037f5d9-3977-4041-a88c-f00a75e8e420\") " pod="calico-system/calico-node-b5tbc"
Oct 13 00:00:52.774281 kubelet[1864]: I1013 00:00:52.774167 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e037f5d9-3977-4041-a88c-f00a75e8e420-flexvol-driver-host\") pod \"calico-node-b5tbc\" (UID: \"e037f5d9-3977-4041-a88c-f00a75e8e420\") " pod="calico-system/calico-node-b5tbc"
Oct 13 00:00:52.774281 kubelet[1864]: I1013 00:00:52.774199 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e037f5d9-3977-4041-a88c-f00a75e8e420-node-certs\") pod \"calico-node-b5tbc\" (UID: \"e037f5d9-3977-4041-a88c-f00a75e8e420\") " pod="calico-system/calico-node-b5tbc"
Oct 13 00:00:52.774281 kubelet[1864]: I1013 00:00:52.774213 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e037f5d9-3977-4041-a88c-f00a75e8e420-policysync\") pod \"calico-node-b5tbc\" (UID: \"e037f5d9-3977-4041-a88c-f00a75e8e420\") " pod="calico-system/calico-node-b5tbc"
Oct 13 00:00:52.774367 kubelet[1864]: I1013 00:00:52.774227 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e037f5d9-3977-4041-a88c-f00a75e8e420-xtables-lock\") pod \"calico-node-b5tbc\" (UID: \"e037f5d9-3977-4041-a88c-f00a75e8e420\") " pod="calico-system/calico-node-b5tbc"
Oct 13 00:00:52.789157 kubelet[1864]: E1013 00:00:52.788590 1864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8zhm2" podUID="9a5371c7-64f8-48a1-bd35-b322196d1b79"
Oct 13 00:00:52.794821 systemd[1]: Created slice kubepods-besteffort-pode037f5d9_3977_4041_a88c_f00a75e8e420.slice - libcontainer container kubepods-besteffort-pode037f5d9_3977_4041_a88c_f00a75e8e420.slice.
Oct 13 00:00:52.808952 systemd[1]: Created slice kubepods-besteffort-podabb46d1a_66ca_43b6_970d_50f211aa1a45.slice - libcontainer container kubepods-besteffort-podabb46d1a_66ca_43b6_970d_50f211aa1a45.slice.
Oct 13 00:00:52.868345 kubelet[1864]: I1013 00:00:52.868290 1864 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 13 00:00:52.875005 kubelet[1864]: I1013 00:00:52.874958 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a5371c7-64f8-48a1-bd35-b322196d1b79-kubelet-dir\") pod \"csi-node-driver-8zhm2\" (UID: \"9a5371c7-64f8-48a1-bd35-b322196d1b79\") " pod="calico-system/csi-node-driver-8zhm2"
Oct 13 00:00:52.875005 kubelet[1864]: I1013 00:00:52.874999 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9a5371c7-64f8-48a1-bd35-b322196d1b79-registration-dir\") pod \"csi-node-driver-8zhm2\" (UID: \"9a5371c7-64f8-48a1-bd35-b322196d1b79\") " pod="calico-system/csi-node-driver-8zhm2"
Oct 13 00:00:52.875162 kubelet[1864]: I1013 00:00:52.875022 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftgcq\" (UniqueName: \"kubernetes.io/projected/9a5371c7-64f8-48a1-bd35-b322196d1b79-kube-api-access-ftgcq\") pod \"csi-node-driver-8zhm2\" (UID: \"9a5371c7-64f8-48a1-bd35-b322196d1b79\") " pod="calico-system/csi-node-driver-8zhm2"
Oct 13 00:00:52.875162 kubelet[1864]: I1013 00:00:52.875084 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrvmq\" (UniqueName: \"kubernetes.io/projected/abb46d1a-66ca-43b6-970d-50f211aa1a45-kube-api-access-hrvmq\") pod \"kube-proxy-v6dnm\" (UID: \"abb46d1a-66ca-43b6-970d-50f211aa1a45\") " pod="kube-system/kube-proxy-v6dnm"
Oct 13 00:00:52.875332 kubelet[1864]: I1013 00:00:52.875293 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9a5371c7-64f8-48a1-bd35-b322196d1b79-varrun\") pod \"csi-node-driver-8zhm2\" (UID: \"9a5371c7-64f8-48a1-bd35-b322196d1b79\") " pod="calico-system/csi-node-driver-8zhm2"
Oct 13 00:00:52.875332 kubelet[1864]: I1013 00:00:52.875328 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abb46d1a-66ca-43b6-970d-50f211aa1a45-xtables-lock\") pod \"kube-proxy-v6dnm\" (UID: \"abb46d1a-66ca-43b6-970d-50f211aa1a45\") " pod="kube-system/kube-proxy-v6dnm"
Oct 13 00:00:52.875382 kubelet[1864]: I1013 00:00:52.875344 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abb46d1a-66ca-43b6-970d-50f211aa1a45-lib-modules\") pod \"kube-proxy-v6dnm\" (UID: \"abb46d1a-66ca-43b6-970d-50f211aa1a45\") " pod="kube-system/kube-proxy-v6dnm"
Oct 13 00:00:52.875405 kubelet[1864]: I1013 00:00:52.875397 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/abb46d1a-66ca-43b6-970d-50f211aa1a45-kube-proxy\") pod \"kube-proxy-v6dnm\" (UID: \"abb46d1a-66ca-43b6-970d-50f211aa1a45\") " pod="kube-system/kube-proxy-v6dnm"
Oct 13 00:00:52.875441 kubelet[1864]: I1013 00:00:52.875430 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9a5371c7-64f8-48a1-bd35-b322196d1b79-socket-dir\") pod \"csi-node-driver-8zhm2\" (UID: \"9a5371c7-64f8-48a1-bd35-b322196d1b79\") " pod="calico-system/csi-node-driver-8zhm2"
Oct 13 00:00:52.876244 kubelet[1864]: E1013 00:00:52.876098 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:00:52.876244 kubelet[1864]: W1013 00:00:52.876118 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:00:52.876244 kubelet[1864]: E1013 00:00:52.876149 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:00:52.876412 kubelet[1864]: E1013 00:00:52.876391 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:00:52.876412 kubelet[1864]: W1013 00:00:52.876406 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:00:52.876506 kubelet[1864]: E1013 00:00:52.876420 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:00:52.876568 kubelet[1864]: E1013 00:00:52.876553 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:00:52.876568 kubelet[1864]: W1013 00:00:52.876563 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:00:52.876765 kubelet[1864]: E1013 00:00:52.876571 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:00:52.876934 kubelet[1864]: E1013 00:00:52.876908 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:00:52.877002 kubelet[1864]: W1013 00:00:52.876980 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:00:52.877054 kubelet[1864]: E1013 00:00:52.877044 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:00:52.877327 kubelet[1864]: E1013 00:00:52.877314 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:00:52.877395 kubelet[1864]: W1013 00:00:52.877383 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:00:52.877466 kubelet[1864]: E1013 00:00:52.877454 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:00:52.880043 kubelet[1864]: E1013 00:00:52.879016 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:00:52.880043 kubelet[1864]: W1013 00:00:52.879044 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:00:52.880043 kubelet[1864]: E1013 00:00:52.879062 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:00:52.880043 kubelet[1864]: E1013 00:00:52.879233 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:00:52.880043 kubelet[1864]: W1013 00:00:52.879240 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:00:52.880043 kubelet[1864]: E1013 00:00:52.879259 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:00:52.880043 kubelet[1864]: E1013 00:00:52.879381 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:00:52.880043 kubelet[1864]: W1013 00:00:52.879388 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:00:52.880043 kubelet[1864]: E1013 00:00:52.879395 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:00:52.880043 kubelet[1864]: E1013 00:00:52.879515 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:00:52.880343 kubelet[1864]: W1013 00:00:52.879522 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:00:52.880343 kubelet[1864]: E1013 00:00:52.879530 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:00:52.880343 kubelet[1864]: E1013 00:00:52.879753 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:00:52.880343 kubelet[1864]: W1013 00:00:52.879767 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:00:52.880343 kubelet[1864]: E1013 00:00:52.879781 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.882944 kubelet[1864]: E1013 00:00:52.882839 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.882944 kubelet[1864]: W1013 00:00:52.882860 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.882944 kubelet[1864]: E1013 00:00:52.882888 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.886588 kubelet[1864]: E1013 00:00:52.886545 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.886588 kubelet[1864]: W1013 00:00:52.886569 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.886588 kubelet[1864]: E1013 00:00:52.886589 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.976688 kubelet[1864]: E1013 00:00:52.976578 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.976688 kubelet[1864]: W1013 00:00:52.976603 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.976688 kubelet[1864]: E1013 00:00:52.976621 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.976852 kubelet[1864]: E1013 00:00:52.976811 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.976852 kubelet[1864]: W1013 00:00:52.976819 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.976852 kubelet[1864]: E1013 00:00:52.976829 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.977045 kubelet[1864]: E1013 00:00:52.977011 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.977045 kubelet[1864]: W1013 00:00:52.977040 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.977103 kubelet[1864]: E1013 00:00:52.977049 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.977287 kubelet[1864]: E1013 00:00:52.977254 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.977287 kubelet[1864]: W1013 00:00:52.977267 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.977287 kubelet[1864]: E1013 00:00:52.977276 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.977483 kubelet[1864]: E1013 00:00:52.977447 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.977483 kubelet[1864]: W1013 00:00:52.977459 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.977483 kubelet[1864]: E1013 00:00:52.977467 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.977637 kubelet[1864]: E1013 00:00:52.977610 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.977637 kubelet[1864]: W1013 00:00:52.977621 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.977682 kubelet[1864]: E1013 00:00:52.977629 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.977894 kubelet[1864]: E1013 00:00:52.977880 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.977894 kubelet[1864]: W1013 00:00:52.977891 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.977959 kubelet[1864]: E1013 00:00:52.977899 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.978072 kubelet[1864]: E1013 00:00:52.978058 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.978072 kubelet[1864]: W1013 00:00:52.978070 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.978113 kubelet[1864]: E1013 00:00:52.978081 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.978234 kubelet[1864]: E1013 00:00:52.978223 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.978234 kubelet[1864]: W1013 00:00:52.978232 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.978287 kubelet[1864]: E1013 00:00:52.978241 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.978374 kubelet[1864]: E1013 00:00:52.978363 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.978374 kubelet[1864]: W1013 00:00:52.978372 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.978416 kubelet[1864]: E1013 00:00:52.978380 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.978509 kubelet[1864]: E1013 00:00:52.978500 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.978532 kubelet[1864]: W1013 00:00:52.978509 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.978532 kubelet[1864]: E1013 00:00:52.978516 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.978688 kubelet[1864]: E1013 00:00:52.978659 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.978688 kubelet[1864]: W1013 00:00:52.978670 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.978688 kubelet[1864]: E1013 00:00:52.978677 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.978858 kubelet[1864]: E1013 00:00:52.978847 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.978858 kubelet[1864]: W1013 00:00:52.978857 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.978913 kubelet[1864]: E1013 00:00:52.978864 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.979031 kubelet[1864]: E1013 00:00:52.979019 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.979031 kubelet[1864]: W1013 00:00:52.979029 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.979078 kubelet[1864]: E1013 00:00:52.979037 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.979169 kubelet[1864]: E1013 00:00:52.979158 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.979191 kubelet[1864]: W1013 00:00:52.979168 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.979191 kubelet[1864]: E1013 00:00:52.979175 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.979305 kubelet[1864]: E1013 00:00:52.979295 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.979324 kubelet[1864]: W1013 00:00:52.979305 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.979324 kubelet[1864]: E1013 00:00:52.979313 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.979477 kubelet[1864]: E1013 00:00:52.979468 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.979503 kubelet[1864]: W1013 00:00:52.979479 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.979503 kubelet[1864]: E1013 00:00:52.979486 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.979625 kubelet[1864]: E1013 00:00:52.979616 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.979648 kubelet[1864]: W1013 00:00:52.979625 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.979648 kubelet[1864]: E1013 00:00:52.979633 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.979765 kubelet[1864]: E1013 00:00:52.979755 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.979784 kubelet[1864]: W1013 00:00:52.979765 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.979784 kubelet[1864]: E1013 00:00:52.979773 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.979935 kubelet[1864]: E1013 00:00:52.979924 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.979935 kubelet[1864]: W1013 00:00:52.979934 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.979982 kubelet[1864]: E1013 00:00:52.979942 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.980089 kubelet[1864]: E1013 00:00:52.980079 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.980109 kubelet[1864]: W1013 00:00:52.980088 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.980109 kubelet[1864]: E1013 00:00:52.980096 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.980247 kubelet[1864]: E1013 00:00:52.980237 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.980271 kubelet[1864]: W1013 00:00:52.980247 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.980271 kubelet[1864]: E1013 00:00:52.980255 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.980381 kubelet[1864]: E1013 00:00:52.980371 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.980400 kubelet[1864]: W1013 00:00:52.980381 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.980400 kubelet[1864]: E1013 00:00:52.980388 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.980514 kubelet[1864]: E1013 00:00:52.980505 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.980533 kubelet[1864]: W1013 00:00:52.980514 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.980533 kubelet[1864]: E1013 00:00:52.980521 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.980649 kubelet[1864]: E1013 00:00:52.980639 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.980671 kubelet[1864]: W1013 00:00:52.980648 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.980671 kubelet[1864]: E1013 00:00:52.980655 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.980787 kubelet[1864]: E1013 00:00:52.980778 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.980787 kubelet[1864]: W1013 00:00:52.980787 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.980840 kubelet[1864]: E1013 00:00:52.980802 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.980972 kubelet[1864]: E1013 00:00:52.980961 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.980972 kubelet[1864]: W1013 00:00:52.980971 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.981009 kubelet[1864]: E1013 00:00:52.980979 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.981175 kubelet[1864]: E1013 00:00:52.981164 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.981195 kubelet[1864]: W1013 00:00:52.981176 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.981195 kubelet[1864]: E1013 00:00:52.981185 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.981324 kubelet[1864]: E1013 00:00:52.981314 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.981345 kubelet[1864]: W1013 00:00:52.981325 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.981345 kubelet[1864]: E1013 00:00:52.981333 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.981473 kubelet[1864]: E1013 00:00:52.981463 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.981492 kubelet[1864]: W1013 00:00:52.981473 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.981492 kubelet[1864]: E1013 00:00:52.981481 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.981624 kubelet[1864]: E1013 00:00:52.981613 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.981646 kubelet[1864]: W1013 00:00:52.981624 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.981646 kubelet[1864]: E1013 00:00:52.981632 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.981773 kubelet[1864]: E1013 00:00:52.981762 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.981816 kubelet[1864]: W1013 00:00:52.981773 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.981816 kubelet[1864]: E1013 00:00:52.981781 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.981969 kubelet[1864]: E1013 00:00:52.981956 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.981991 kubelet[1864]: W1013 00:00:52.981968 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.981991 kubelet[1864]: E1013 00:00:52.981977 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.982127 kubelet[1864]: E1013 00:00:52.982117 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.982150 kubelet[1864]: W1013 00:00:52.982127 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.982150 kubelet[1864]: E1013 00:00:52.982136 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.982277 kubelet[1864]: E1013 00:00:52.982267 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.982297 kubelet[1864]: W1013 00:00:52.982277 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.982297 kubelet[1864]: E1013 00:00:52.982286 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.982464 kubelet[1864]: E1013 00:00:52.982453 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.982484 kubelet[1864]: W1013 00:00:52.982464 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.982484 kubelet[1864]: E1013 00:00:52.982472 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.982627 kubelet[1864]: E1013 00:00:52.982617 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.982647 kubelet[1864]: W1013 00:00:52.982627 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.982647 kubelet[1864]: E1013 00:00:52.982637 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.982798 kubelet[1864]: E1013 00:00:52.982782 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.982864 kubelet[1864]: W1013 00:00:52.982852 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.982896 kubelet[1864]: E1013 00:00:52.982865 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.983071 kubelet[1864]: E1013 00:00:52.983059 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.983096 kubelet[1864]: W1013 00:00:52.983071 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.983096 kubelet[1864]: E1013 00:00:52.983080 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.983240 kubelet[1864]: E1013 00:00:52.983229 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.983259 kubelet[1864]: W1013 00:00:52.983240 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.983259 kubelet[1864]: E1013 00:00:52.983248 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.983478 kubelet[1864]: E1013 00:00:52.983464 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.983478 kubelet[1864]: W1013 00:00:52.983476 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.983534 kubelet[1864]: E1013 00:00:52.983485 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.983893 kubelet[1864]: E1013 00:00:52.983865 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.983893 kubelet[1864]: W1013 00:00:52.983889 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.983953 kubelet[1864]: E1013 00:00:52.983900 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.984063 kubelet[1864]: E1013 00:00:52.984053 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.984084 kubelet[1864]: W1013 00:00:52.984063 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.984084 kubelet[1864]: E1013 00:00:52.984071 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.984250 kubelet[1864]: E1013 00:00:52.984238 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.984269 kubelet[1864]: W1013 00:00:52.984249 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.984269 kubelet[1864]: E1013 00:00:52.984258 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.984410 kubelet[1864]: E1013 00:00:52.984399 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.984429 kubelet[1864]: W1013 00:00:52.984410 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.984429 kubelet[1864]: E1013 00:00:52.984419 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.984594 kubelet[1864]: E1013 00:00:52.984582 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.984613 kubelet[1864]: W1013 00:00:52.984594 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.984613 kubelet[1864]: E1013 00:00:52.984603 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:52.994722 kubelet[1864]: E1013 00:00:52.994681 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.994722 kubelet[1864]: W1013 00:00:52.994701 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.994722 kubelet[1864]: E1013 00:00:52.994718 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:52.994928 kubelet[1864]: E1013 00:00:52.994912 1864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:52.994928 kubelet[1864]: W1013 00:00:52.994924 1864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:52.995002 kubelet[1864]: E1013 00:00:52.994933 1864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:53.110069 containerd[1533]: time="2025-10-13T00:00:53.109909057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b5tbc,Uid:e037f5d9-3977-4041-a88c-f00a75e8e420,Namespace:calico-system,Attempt:0,}" Oct 13 00:00:53.114042 containerd[1533]: time="2025-10-13T00:00:53.113979497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v6dnm,Uid:abb46d1a-66ca-43b6-970d-50f211aa1a45,Namespace:kube-system,Attempt:0,}" Oct 13 00:00:53.668825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776631406.mount: Deactivated successfully. 
Oct 13 00:00:53.676710 containerd[1533]: time="2025-10-13T00:00:53.676649137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:00:53.677933 containerd[1533]: time="2025-10-13T00:00:53.677892377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 13 00:00:53.678947 containerd[1533]: time="2025-10-13T00:00:53.678906977Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:00:53.680033 containerd[1533]: time="2025-10-13T00:00:53.679994057Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 00:00:53.681131 containerd[1533]: time="2025-10-13T00:00:53.680718897Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:00:53.682672 containerd[1533]: time="2025-10-13T00:00:53.682633497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:00:53.684251 containerd[1533]: time="2025-10-13T00:00:53.684197137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 566.47508ms" Oct 13 00:00:53.685533 containerd[1533]: 
time="2025-10-13T00:00:53.685315697Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 564.94456ms" Oct 13 00:00:53.701162 containerd[1533]: time="2025-10-13T00:00:53.701106977Z" level=info msg="connecting to shim 8aef25347b9d579dfdbe63c3ef3a4e8e8cb849d14f318acc249b5bfec9d6ed38" address="unix:///run/containerd/s/d7815a32bd568b29f137ef8927e469235ccf9fd2b184f2cb7666582dfab1e558" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:00:53.703621 containerd[1533]: time="2025-10-13T00:00:53.703577497Z" level=info msg="connecting to shim cb517c9b95e7f20f2de8fa62ac5f2eb291470f9617d7764f6757cd933464e790" address="unix:///run/containerd/s/8411f98ae2efea9ab0ce848abd6ef8745ee1ff22150ae05044b3a0c58977e390" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:00:53.726045 systemd[1]: Started cri-containerd-8aef25347b9d579dfdbe63c3ef3a4e8e8cb849d14f318acc249b5bfec9d6ed38.scope - libcontainer container 8aef25347b9d579dfdbe63c3ef3a4e8e8cb849d14f318acc249b5bfec9d6ed38. Oct 13 00:00:53.729679 systemd[1]: Started cri-containerd-cb517c9b95e7f20f2de8fa62ac5f2eb291470f9617d7764f6757cd933464e790.scope - libcontainer container cb517c9b95e7f20f2de8fa62ac5f2eb291470f9617d7764f6757cd933464e790. 
Oct 13 00:00:53.750585 kubelet[1864]: E1013 00:00:53.750449 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:00:53.755441 containerd[1533]: time="2025-10-13T00:00:53.755387817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b5tbc,Uid:e037f5d9-3977-4041-a88c-f00a75e8e420,Namespace:calico-system,Attempt:0,} returns sandbox id \"8aef25347b9d579dfdbe63c3ef3a4e8e8cb849d14f318acc249b5bfec9d6ed38\"" Oct 13 00:00:53.758287 containerd[1533]: time="2025-10-13T00:00:53.758236537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 00:00:53.763604 containerd[1533]: time="2025-10-13T00:00:53.763565177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v6dnm,Uid:abb46d1a-66ca-43b6-970d-50f211aa1a45,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb517c9b95e7f20f2de8fa62ac5f2eb291470f9617d7764f6757cd933464e790\"" Oct 13 00:00:53.826106 kubelet[1864]: E1013 00:00:53.826041 1864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8zhm2" podUID="9a5371c7-64f8-48a1-bd35-b322196d1b79" Oct 13 00:00:54.751008 kubelet[1864]: E1013 00:00:54.750962 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:00:55.751563 kubelet[1864]: E1013 00:00:55.751496 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:00:55.826349 kubelet[1864]: E1013 00:00:55.826030 1864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-8zhm2" podUID="9a5371c7-64f8-48a1-bd35-b322196d1b79" Oct 13 00:00:56.152472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2891465147.mount: Deactivated successfully. Oct 13 00:00:56.276136 containerd[1533]: time="2025-10-13T00:00:56.276082737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:56.276734 containerd[1533]: time="2025-10-13T00:00:56.276707657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193" Oct 13 00:00:56.277812 containerd[1533]: time="2025-10-13T00:00:56.277707737Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:56.279908 containerd[1533]: time="2025-10-13T00:00:56.279870697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:56.280456 containerd[1533]: time="2025-10-13T00:00:56.280419897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 2.52214032s" Oct 13 00:00:56.280532 containerd[1533]: time="2025-10-13T00:00:56.280456897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Oct 13 00:00:56.281886 
containerd[1533]: time="2025-10-13T00:00:56.281662777Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 13 00:00:56.285244 containerd[1533]: time="2025-10-13T00:00:56.285194817Z" level=info msg="CreateContainer within sandbox \"8aef25347b9d579dfdbe63c3ef3a4e8e8cb849d14f318acc249b5bfec9d6ed38\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 00:00:56.295507 containerd[1533]: time="2025-10-13T00:00:56.295466017Z" level=info msg="Container 8f47ba8f08cb658074f7e0eea15a86671a358ea3fcc9a8797f7dc55fdbb30898: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:00:56.303211 containerd[1533]: time="2025-10-13T00:00:56.303101737Z" level=info msg="CreateContainer within sandbox \"8aef25347b9d579dfdbe63c3ef3a4e8e8cb849d14f318acc249b5bfec9d6ed38\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8f47ba8f08cb658074f7e0eea15a86671a358ea3fcc9a8797f7dc55fdbb30898\"" Oct 13 00:00:56.304141 containerd[1533]: time="2025-10-13T00:00:56.304113777Z" level=info msg="StartContainer for \"8f47ba8f08cb658074f7e0eea15a86671a358ea3fcc9a8797f7dc55fdbb30898\"" Oct 13 00:00:56.305669 containerd[1533]: time="2025-10-13T00:00:56.305641577Z" level=info msg="connecting to shim 8f47ba8f08cb658074f7e0eea15a86671a358ea3fcc9a8797f7dc55fdbb30898" address="unix:///run/containerd/s/d7815a32bd568b29f137ef8927e469235ccf9fd2b184f2cb7666582dfab1e558" protocol=ttrpc version=3 Oct 13 00:00:56.331022 systemd[1]: Started cri-containerd-8f47ba8f08cb658074f7e0eea15a86671a358ea3fcc9a8797f7dc55fdbb30898.scope - libcontainer container 8f47ba8f08cb658074f7e0eea15a86671a358ea3fcc9a8797f7dc55fdbb30898. 
Oct 13 00:00:56.364405 containerd[1533]: time="2025-10-13T00:00:56.364365057Z" level=info msg="StartContainer for \"8f47ba8f08cb658074f7e0eea15a86671a358ea3fcc9a8797f7dc55fdbb30898\" returns successfully" Oct 13 00:00:56.374706 systemd[1]: cri-containerd-8f47ba8f08cb658074f7e0eea15a86671a358ea3fcc9a8797f7dc55fdbb30898.scope: Deactivated successfully. Oct 13 00:00:56.379161 containerd[1533]: time="2025-10-13T00:00:56.379065377Z" level=info msg="received exit event container_id:\"8f47ba8f08cb658074f7e0eea15a86671a358ea3fcc9a8797f7dc55fdbb30898\" id:\"8f47ba8f08cb658074f7e0eea15a86671a358ea3fcc9a8797f7dc55fdbb30898\" pid:2089 exited_at:{seconds:1760313656 nanos:378294217}" Oct 13 00:00:56.379260 containerd[1533]: time="2025-10-13T00:00:56.379119177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f47ba8f08cb658074f7e0eea15a86671a358ea3fcc9a8797f7dc55fdbb30898\" id:\"8f47ba8f08cb658074f7e0eea15a86671a358ea3fcc9a8797f7dc55fdbb30898\" pid:2089 exited_at:{seconds:1760313656 nanos:378294217}" Oct 13 00:00:56.752565 kubelet[1864]: E1013 00:00:56.752517 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:00:57.128514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f47ba8f08cb658074f7e0eea15a86671a358ea3fcc9a8797f7dc55fdbb30898-rootfs.mount: Deactivated successfully. Oct 13 00:00:57.230734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount102362218.mount: Deactivated successfully. 
Oct 13 00:00:57.428697 containerd[1533]: time="2025-10-13T00:00:57.428444497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:57.429045 containerd[1533]: time="2025-10-13T00:00:57.428989697Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789030" Oct 13 00:00:57.430003 containerd[1533]: time="2025-10-13T00:00:57.429960897Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:57.432194 containerd[1533]: time="2025-10-13T00:00:57.432148097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:57.432831 containerd[1533]: time="2025-10-13T00:00:57.432761337Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.15106772s" Oct 13 00:00:57.432881 containerd[1533]: time="2025-10-13T00:00:57.432840177Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Oct 13 00:00:57.434394 containerd[1533]: time="2025-10-13T00:00:57.434341497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 00:00:57.437180 containerd[1533]: time="2025-10-13T00:00:57.437138297Z" level=info msg="CreateContainer within sandbox \"cb517c9b95e7f20f2de8fa62ac5f2eb291470f9617d7764f6757cd933464e790\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 00:00:57.446870 containerd[1533]: time="2025-10-13T00:00:57.446820577Z" level=info msg="Container a7d4143688a23f389a953f61d52ad6989a4663d50367d399c9f15cbd11798f7b: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:00:57.455089 containerd[1533]: time="2025-10-13T00:00:57.455016497Z" level=info msg="CreateContainer within sandbox \"cb517c9b95e7f20f2de8fa62ac5f2eb291470f9617d7764f6757cd933464e790\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a7d4143688a23f389a953f61d52ad6989a4663d50367d399c9f15cbd11798f7b\"" Oct 13 00:00:57.455704 containerd[1533]: time="2025-10-13T00:00:57.455666017Z" level=info msg="StartContainer for \"a7d4143688a23f389a953f61d52ad6989a4663d50367d399c9f15cbd11798f7b\"" Oct 13 00:00:57.457121 containerd[1533]: time="2025-10-13T00:00:57.457090857Z" level=info msg="connecting to shim a7d4143688a23f389a953f61d52ad6989a4663d50367d399c9f15cbd11798f7b" address="unix:///run/containerd/s/8411f98ae2efea9ab0ce848abd6ef8745ee1ff22150ae05044b3a0c58977e390" protocol=ttrpc version=3 Oct 13 00:00:57.477998 systemd[1]: Started cri-containerd-a7d4143688a23f389a953f61d52ad6989a4663d50367d399c9f15cbd11798f7b.scope - libcontainer container a7d4143688a23f389a953f61d52ad6989a4663d50367d399c9f15cbd11798f7b. 
Oct 13 00:00:57.519017 containerd[1533]: time="2025-10-13T00:00:57.518978617Z" level=info msg="StartContainer for \"a7d4143688a23f389a953f61d52ad6989a4663d50367d399c9f15cbd11798f7b\" returns successfully" Oct 13 00:00:57.753011 kubelet[1864]: E1013 00:00:57.752900 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:00:57.828245 kubelet[1864]: E1013 00:00:57.828181 1864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8zhm2" podUID="9a5371c7-64f8-48a1-bd35-b322196d1b79" Oct 13 00:00:57.849518 kubelet[1864]: I1013 00:00:57.849393 1864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v6dnm" podStartSLOduration=3.1805606969999998 podStartE2EDuration="6.849374737s" podCreationTimestamp="2025-10-13 00:00:51 +0000 UTC" firstStartedPulling="2025-10-13 00:00:53.764774297 +0000 UTC m=+3.068365241" lastFinishedPulling="2025-10-13 00:00:57.433588297 +0000 UTC m=+6.737179281" observedRunningTime="2025-10-13 00:00:57.849331017 +0000 UTC m=+7.152921961" watchObservedRunningTime="2025-10-13 00:00:57.849374737 +0000 UTC m=+7.152965681" Oct 13 00:00:58.753896 kubelet[1864]: E1013 00:00:58.753845 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:00:59.754279 kubelet[1864]: E1013 00:00:59.754234 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:00:59.826038 kubelet[1864]: E1013 00:00:59.825944 1864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-8zhm2" podUID="9a5371c7-64f8-48a1-bd35-b322196d1b79" Oct 13 00:01:00.740280 containerd[1533]: time="2025-10-13T00:01:00.740236177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:00.742969 containerd[1533]: time="2025-10-13T00:01:00.742891137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Oct 13 00:01:00.743490 containerd[1533]: time="2025-10-13T00:01:00.743416817Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:00.746103 containerd[1533]: time="2025-10-13T00:01:00.745851657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:00.746511 containerd[1533]: time="2025-10-13T00:01:00.746482457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.31210216s" Oct 13 00:01:00.746583 containerd[1533]: time="2025-10-13T00:01:00.746568697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Oct 13 00:01:00.751685 containerd[1533]: time="2025-10-13T00:01:00.751650857Z" level=info msg="CreateContainer within sandbox \"8aef25347b9d579dfdbe63c3ef3a4e8e8cb849d14f318acc249b5bfec9d6ed38\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 00:01:00.754991 kubelet[1864]: E1013 00:01:00.754962 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:00.761089 containerd[1533]: time="2025-10-13T00:01:00.761045457Z" level=info msg="Container 0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:00.763508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2833372271.mount: Deactivated successfully. Oct 13 00:01:00.768847 containerd[1533]: time="2025-10-13T00:01:00.768805777Z" level=info msg="CreateContainer within sandbox \"8aef25347b9d579dfdbe63c3ef3a4e8e8cb849d14f318acc249b5bfec9d6ed38\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0\"" Oct 13 00:01:00.772772 containerd[1533]: time="2025-10-13T00:01:00.772716857Z" level=info msg="StartContainer for \"0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0\"" Oct 13 00:01:00.774179 containerd[1533]: time="2025-10-13T00:01:00.774151097Z" level=info msg="connecting to shim 0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0" address="unix:///run/containerd/s/d7815a32bd568b29f137ef8927e469235ccf9fd2b184f2cb7666582dfab1e558" protocol=ttrpc version=3 Oct 13 00:01:00.809031 systemd[1]: Started cri-containerd-0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0.scope - libcontainer container 0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0. 
Oct 13 00:01:00.850786 containerd[1533]: time="2025-10-13T00:01:00.850733857Z" level=info msg="StartContainer for \"0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0\" returns successfully" Oct 13 00:01:01.427263 containerd[1533]: time="2025-10-13T00:01:01.427213657Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 00:01:01.429256 systemd[1]: cri-containerd-0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0.scope: Deactivated successfully. Oct 13 00:01:01.429694 systemd[1]: cri-containerd-0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0.scope: Consumed 471ms CPU time, 188.1M memory peak, 165.8M written to disk. Oct 13 00:01:01.431369 containerd[1533]: time="2025-10-13T00:01:01.431339897Z" level=info msg="received exit event container_id:\"0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0\" id:\"0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0\" pid:2322 exited_at:{seconds:1760313661 nanos:431080537}" Oct 13 00:01:01.431439 containerd[1533]: time="2025-10-13T00:01:01.431366857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0\" id:\"0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0\" pid:2322 exited_at:{seconds:1760313661 nanos:431080537}" Oct 13 00:01:01.440989 kubelet[1864]: I1013 00:01:01.440956 1864 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 00:01:01.451237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0070c66648f44c5c3cf745c53b54d93136376e110a12ed6c3ccd8c664c11dce0-rootfs.mount: Deactivated successfully. 
Oct 13 00:01:01.589454 systemd[1]: Created slice kubepods-besteffort-pod8a807673_1d24_4905_967e_6ee4665f380d.slice - libcontainer container kubepods-besteffort-pod8a807673_1d24_4905_967e_6ee4665f380d.slice. Oct 13 00:01:01.597436 systemd[1]: Created slice kubepods-burstable-pod5066ee12_3d1a_4745_bdf2_f00011135a06.slice - libcontainer container kubepods-burstable-pod5066ee12_3d1a_4745_bdf2_f00011135a06.slice. Oct 13 00:01:01.631245 systemd[1]: Created slice kubepods-besteffort-podc0133bef_7fb4_4183_a390_03737bc1326e.slice - libcontainer container kubepods-besteffort-podc0133bef_7fb4_4183_a390_03737bc1326e.slice. Oct 13 00:01:01.635026 systemd[1]: Created slice kubepods-besteffort-podabe04e5d_16c5_445a_94e4_1f8f8f789715.slice - libcontainer container kubepods-besteffort-podabe04e5d_16c5_445a_94e4_1f8f8f789715.slice. Oct 13 00:01:01.641900 systemd[1]: Created slice kubepods-besteffort-pod2c33bcce_e6aa_438f_81c0_a1243e0458a8.slice - libcontainer container kubepods-besteffort-pod2c33bcce_e6aa_438f_81c0_a1243e0458a8.slice. 
Oct 13 00:01:01.644522 kubelet[1864]: I1013 00:01:01.644473 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52988a1f-3347-4dbf-9cc8-0e7e030022e3-whisker-ca-bundle\") pod \"whisker-bd4c9b848-wxwtn\" (UID: \"52988a1f-3347-4dbf-9cc8-0e7e030022e3\") " pod="calico-system/whisker-bd4c9b848-wxwtn" Oct 13 00:01:01.644522 kubelet[1864]: I1013 00:01:01.644522 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a807673-1d24-4905-967e-6ee4665f380d-tigera-ca-bundle\") pod \"calico-kube-controllers-86f478b884-ghpv9\" (UID: \"8a807673-1d24-4905-967e-6ee4665f380d\") " pod="calico-system/calico-kube-controllers-86f478b884-ghpv9" Oct 13 00:01:01.644674 kubelet[1864]: I1013 00:01:01.644602 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c33bcce-e6aa-438f-81c0-a1243e0458a8-config\") pod \"goldmane-854f97d977-9nphb\" (UID: \"2c33bcce-e6aa-438f-81c0-a1243e0458a8\") " pod="calico-system/goldmane-854f97d977-9nphb" Oct 13 00:01:01.644674 kubelet[1864]: I1013 00:01:01.644635 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt9mk\" (UniqueName: \"kubernetes.io/projected/2c33bcce-e6aa-438f-81c0-a1243e0458a8-kube-api-access-tt9mk\") pod \"goldmane-854f97d977-9nphb\" (UID: \"2c33bcce-e6aa-438f-81c0-a1243e0458a8\") " pod="calico-system/goldmane-854f97d977-9nphb" Oct 13 00:01:01.644674 kubelet[1864]: I1013 00:01:01.644656 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jp5n\" (UniqueName: \"kubernetes.io/projected/abe04e5d-16c5-445a-94e4-1f8f8f789715-kube-api-access-4jp5n\") pod \"calico-apiserver-5b686ccc97-q9cxb\" (UID: 
\"abe04e5d-16c5-445a-94e4-1f8f8f789715\") " pod="calico-apiserver/calico-apiserver-5b686ccc97-q9cxb" Oct 13 00:01:01.644738 kubelet[1864]: I1013 00:01:01.644682 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cczc9\" (UniqueName: \"kubernetes.io/projected/ff46f70d-6961-4e41-b75b-3a39b35c2c50-kube-api-access-cczc9\") pod \"coredns-66bc5c9577-phvtw\" (UID: \"ff46f70d-6961-4e41-b75b-3a39b35c2c50\") " pod="kube-system/coredns-66bc5c9577-phvtw" Oct 13 00:01:01.644738 kubelet[1864]: I1013 00:01:01.644699 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5066ee12-3d1a-4745-bdf2-f00011135a06-config-volume\") pod \"coredns-66bc5c9577-pzdj6\" (UID: \"5066ee12-3d1a-4745-bdf2-f00011135a06\") " pod="kube-system/coredns-66bc5c9577-pzdj6" Oct 13 00:01:01.644738 kubelet[1864]: I1013 00:01:01.644714 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2c33bcce-e6aa-438f-81c0-a1243e0458a8-goldmane-key-pair\") pod \"goldmane-854f97d977-9nphb\" (UID: \"2c33bcce-e6aa-438f-81c0-a1243e0458a8\") " pod="calico-system/goldmane-854f97d977-9nphb" Oct 13 00:01:01.644738 kubelet[1864]: I1013 00:01:01.644731 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/52988a1f-3347-4dbf-9cc8-0e7e030022e3-whisker-backend-key-pair\") pod \"whisker-bd4c9b848-wxwtn\" (UID: \"52988a1f-3347-4dbf-9cc8-0e7e030022e3\") " pod="calico-system/whisker-bd4c9b848-wxwtn" Oct 13 00:01:01.644858 kubelet[1864]: I1013 00:01:01.644756 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p9mq\" (UniqueName: 
\"kubernetes.io/projected/06a79750-ad5b-4923-88da-893494a6c052-kube-api-access-9p9mq\") pod \"calico-apiserver-765759bb74-868z9\" (UID: \"06a79750-ad5b-4923-88da-893494a6c052\") " pod="calico-apiserver/calico-apiserver-765759bb74-868z9" Oct 13 00:01:01.644858 kubelet[1864]: I1013 00:01:01.644774 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/06a79750-ad5b-4923-88da-893494a6c052-calico-apiserver-certs\") pod \"calico-apiserver-765759bb74-868z9\" (UID: \"06a79750-ad5b-4923-88da-893494a6c052\") " pod="calico-apiserver/calico-apiserver-765759bb74-868z9" Oct 13 00:01:01.644858 kubelet[1864]: I1013 00:01:01.644806 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br55m\" (UniqueName: \"kubernetes.io/projected/8a807673-1d24-4905-967e-6ee4665f380d-kube-api-access-br55m\") pod \"calico-kube-controllers-86f478b884-ghpv9\" (UID: \"8a807673-1d24-4905-967e-6ee4665f380d\") " pod="calico-system/calico-kube-controllers-86f478b884-ghpv9" Oct 13 00:01:01.644858 kubelet[1864]: I1013 00:01:01.644825 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kksrd\" (UniqueName: \"kubernetes.io/projected/5066ee12-3d1a-4745-bdf2-f00011135a06-kube-api-access-kksrd\") pod \"coredns-66bc5c9577-pzdj6\" (UID: \"5066ee12-3d1a-4745-bdf2-f00011135a06\") " pod="kube-system/coredns-66bc5c9577-pzdj6" Oct 13 00:01:01.644858 kubelet[1864]: I1013 00:01:01.644849 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c0133bef-7fb4-4183-a390-03737bc1326e-calico-apiserver-certs\") pod \"calico-apiserver-5b686ccc97-gg2z6\" (UID: \"c0133bef-7fb4-4183-a390-03737bc1326e\") " pod="calico-apiserver/calico-apiserver-5b686ccc97-gg2z6" Oct 13 00:01:01.644962 
kubelet[1864]: I1013 00:01:01.644871 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c33bcce-e6aa-438f-81c0-a1243e0458a8-goldmane-ca-bundle\") pod \"goldmane-854f97d977-9nphb\" (UID: \"2c33bcce-e6aa-438f-81c0-a1243e0458a8\") " pod="calico-system/goldmane-854f97d977-9nphb" Oct 13 00:01:01.644962 kubelet[1864]: I1013 00:01:01.644896 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqg82\" (UniqueName: \"kubernetes.io/projected/52988a1f-3347-4dbf-9cc8-0e7e030022e3-kube-api-access-qqg82\") pod \"whisker-bd4c9b848-wxwtn\" (UID: \"52988a1f-3347-4dbf-9cc8-0e7e030022e3\") " pod="calico-system/whisker-bd4c9b848-wxwtn" Oct 13 00:01:01.644962 kubelet[1864]: I1013 00:01:01.644913 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/abe04e5d-16c5-445a-94e4-1f8f8f789715-calico-apiserver-certs\") pod \"calico-apiserver-5b686ccc97-q9cxb\" (UID: \"abe04e5d-16c5-445a-94e4-1f8f8f789715\") " pod="calico-apiserver/calico-apiserver-5b686ccc97-q9cxb" Oct 13 00:01:01.644962 kubelet[1864]: I1013 00:01:01.644928 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncxwf\" (UniqueName: \"kubernetes.io/projected/c0133bef-7fb4-4183-a390-03737bc1326e-kube-api-access-ncxwf\") pod \"calico-apiserver-5b686ccc97-gg2z6\" (UID: \"c0133bef-7fb4-4183-a390-03737bc1326e\") " pod="calico-apiserver/calico-apiserver-5b686ccc97-gg2z6" Oct 13 00:01:01.644962 kubelet[1864]: I1013 00:01:01.644946 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff46f70d-6961-4e41-b75b-3a39b35c2c50-config-volume\") pod \"coredns-66bc5c9577-phvtw\" (UID: 
\"ff46f70d-6961-4e41-b75b-3a39b35c2c50\") " pod="kube-system/coredns-66bc5c9577-phvtw" Oct 13 00:01:01.646575 systemd[1]: Created slice kubepods-besteffort-pod52988a1f_3347_4dbf_9cc8_0e7e030022e3.slice - libcontainer container kubepods-besteffort-pod52988a1f_3347_4dbf_9cc8_0e7e030022e3.slice. Oct 13 00:01:01.651222 systemd[1]: Created slice kubepods-burstable-podff46f70d_6961_4e41_b75b_3a39b35c2c50.slice - libcontainer container kubepods-burstable-podff46f70d_6961_4e41_b75b_3a39b35c2c50.slice. Oct 13 00:01:01.671142 systemd[1]: Created slice kubepods-besteffort-pod06a79750_ad5b_4923_88da_893494a6c052.slice - libcontainer container kubepods-besteffort-pod06a79750_ad5b_4923_88da_893494a6c052.slice. Oct 13 00:01:01.755645 kubelet[1864]: E1013 00:01:01.755367 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:01.834193 systemd[1]: Created slice kubepods-besteffort-pod9a5371c7_64f8_48a1_bd35_b322196d1b79.slice - libcontainer container kubepods-besteffort-pod9a5371c7_64f8_48a1_bd35_b322196d1b79.slice. 
Oct 13 00:01:01.839423 containerd[1533]: time="2025-10-13T00:01:01.839162057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8zhm2,Uid:9a5371c7-64f8-48a1-bd35-b322196d1b79,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:01.855347 containerd[1533]: time="2025-10-13T00:01:01.855303337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 00:01:01.896084 containerd[1533]: time="2025-10-13T00:01:01.896046017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f478b884-ghpv9,Uid:8a807673-1d24-4905-967e-6ee4665f380d,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:01.915080 containerd[1533]: time="2025-10-13T00:01:01.915018817Z" level=error msg="Failed to destroy network for sandbox \"a40117461c705798d781a7cba8a1794377392b470db0b72e8bafd6b310ac25d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:01.916366 containerd[1533]: time="2025-10-13T00:01:01.916324497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8zhm2,Uid:9a5371c7-64f8-48a1-bd35-b322196d1b79,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40117461c705798d781a7cba8a1794377392b470db0b72e8bafd6b310ac25d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:01.916691 kubelet[1864]: E1013 00:01:01.916564 1864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40117461c705798d781a7cba8a1794377392b470db0b72e8bafd6b310ac25d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 13 00:01:01.916856 kubelet[1864]: E1013 00:01:01.916828 1864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40117461c705798d781a7cba8a1794377392b470db0b72e8bafd6b310ac25d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8zhm2" Oct 13 00:01:01.916928 kubelet[1864]: E1013 00:01:01.916912 1864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40117461c705798d781a7cba8a1794377392b470db0b72e8bafd6b310ac25d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8zhm2" Oct 13 00:01:01.917053 kubelet[1864]: E1013 00:01:01.917026 1864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8zhm2_calico-system(9a5371c7-64f8-48a1-bd35-b322196d1b79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8zhm2_calico-system(9a5371c7-64f8-48a1-bd35-b322196d1b79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a40117461c705798d781a7cba8a1794377392b470db0b72e8bafd6b310ac25d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8zhm2" podUID="9a5371c7-64f8-48a1-bd35-b322196d1b79" Oct 13 00:01:01.929026 containerd[1533]: time="2025-10-13T00:01:01.928949297Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-pzdj6,Uid:5066ee12-3d1a-4745-bdf2-f00011135a06,Namespace:kube-system,Attempt:0,}" Oct 13 00:01:01.937929 containerd[1533]: time="2025-10-13T00:01:01.937887057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b686ccc97-gg2z6,Uid:c0133bef-7fb4-4183-a390-03737bc1326e,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:01:01.939826 containerd[1533]: time="2025-10-13T00:01:01.939783297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b686ccc97-q9cxb,Uid:abe04e5d-16c5-445a-94e4-1f8f8f789715,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:01:01.947598 containerd[1533]: time="2025-10-13T00:01:01.947543817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-9nphb,Uid:2c33bcce-e6aa-438f-81c0-a1243e0458a8,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:01.954813 containerd[1533]: time="2025-10-13T00:01:01.953459657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd4c9b848-wxwtn,Uid:52988a1f-3347-4dbf-9cc8-0e7e030022e3,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:01.970490 containerd[1533]: time="2025-10-13T00:01:01.970439657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-phvtw,Uid:ff46f70d-6961-4e41-b75b-3a39b35c2c50,Namespace:kube-system,Attempt:0,}" Oct 13 00:01:01.973313 containerd[1533]: time="2025-10-13T00:01:01.973262097Z" level=error msg="Failed to destroy network for sandbox \"a97c874b093e658a632c4e8b9ca2073b2dab2e23d294c438f337de4ff4dedf11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:01.977748 containerd[1533]: time="2025-10-13T00:01:01.977704377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765759bb74-868z9,Uid:06a79750-ad5b-4923-88da-893494a6c052,Namespace:calico-apiserver,Attempt:0,}" 
Oct 13 00:01:01.981853 containerd[1533]: time="2025-10-13T00:01:01.981767457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f478b884-ghpv9,Uid:8a807673-1d24-4905-967e-6ee4665f380d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a97c874b093e658a632c4e8b9ca2073b2dab2e23d294c438f337de4ff4dedf11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:01.986363 kubelet[1864]: E1013 00:01:01.985911 1864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a97c874b093e658a632c4e8b9ca2073b2dab2e23d294c438f337de4ff4dedf11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:01.986363 kubelet[1864]: E1013 00:01:01.985981 1864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a97c874b093e658a632c4e8b9ca2073b2dab2e23d294c438f337de4ff4dedf11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f478b884-ghpv9" Oct 13 00:01:01.986363 kubelet[1864]: E1013 00:01:01.986003 1864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a97c874b093e658a632c4e8b9ca2073b2dab2e23d294c438f337de4ff4dedf11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-86f478b884-ghpv9" Oct 13 00:01:01.986545 kubelet[1864]: E1013 00:01:01.986062 1864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86f478b884-ghpv9_calico-system(8a807673-1d24-4905-967e-6ee4665f380d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86f478b884-ghpv9_calico-system(8a807673-1d24-4905-967e-6ee4665f380d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a97c874b093e658a632c4e8b9ca2073b2dab2e23d294c438f337de4ff4dedf11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86f478b884-ghpv9" podUID="8a807673-1d24-4905-967e-6ee4665f380d" Oct 13 00:01:02.038643 containerd[1533]: time="2025-10-13T00:01:02.038492457Z" level=error msg="Failed to destroy network for sandbox \"c632300f76d223361c5b18f20a235a9d99433ee185c568386db3f8361a673a9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.041704 containerd[1533]: time="2025-10-13T00:01:02.041647857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b686ccc97-q9cxb,Uid:abe04e5d-16c5-445a-94e4-1f8f8f789715,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c632300f76d223361c5b18f20a235a9d99433ee185c568386db3f8361a673a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.042834 kubelet[1864]: E1013 00:01:02.042621 1864 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c632300f76d223361c5b18f20a235a9d99433ee185c568386db3f8361a673a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.042834 kubelet[1864]: E1013 00:01:02.042699 1864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c632300f76d223361c5b18f20a235a9d99433ee185c568386db3f8361a673a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b686ccc97-q9cxb" Oct 13 00:01:02.042834 kubelet[1864]: E1013 00:01:02.042720 1864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c632300f76d223361c5b18f20a235a9d99433ee185c568386db3f8361a673a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b686ccc97-q9cxb" Oct 13 00:01:02.042992 kubelet[1864]: E1013 00:01:02.042778 1864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b686ccc97-q9cxb_calico-apiserver(abe04e5d-16c5-445a-94e4-1f8f8f789715)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b686ccc97-q9cxb_calico-apiserver(abe04e5d-16c5-445a-94e4-1f8f8f789715)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c632300f76d223361c5b18f20a235a9d99433ee185c568386db3f8361a673a9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b686ccc97-q9cxb" podUID="abe04e5d-16c5-445a-94e4-1f8f8f789715" Oct 13 00:01:02.046950 containerd[1533]: time="2025-10-13T00:01:02.046897857Z" level=error msg="Failed to destroy network for sandbox \"6d7c094bee660072e1041a3ee71c57ab99a7073a2408cb18b44fe9435eefb61d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.048580 containerd[1533]: time="2025-10-13T00:01:02.048525137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pzdj6,Uid:5066ee12-3d1a-4745-bdf2-f00011135a06,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d7c094bee660072e1041a3ee71c57ab99a7073a2408cb18b44fe9435eefb61d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.048871 kubelet[1864]: E1013 00:01:02.048742 1864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d7c094bee660072e1041a3ee71c57ab99a7073a2408cb18b44fe9435eefb61d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.048871 kubelet[1864]: E1013 00:01:02.048831 1864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d7c094bee660072e1041a3ee71c57ab99a7073a2408cb18b44fe9435eefb61d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pzdj6" Oct 13 00:01:02.048871 kubelet[1864]: E1013 00:01:02.048865 1864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d7c094bee660072e1041a3ee71c57ab99a7073a2408cb18b44fe9435eefb61d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pzdj6" Oct 13 00:01:02.049043 kubelet[1864]: E1013 00:01:02.049012 1864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-pzdj6_kube-system(5066ee12-3d1a-4745-bdf2-f00011135a06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-pzdj6_kube-system(5066ee12-3d1a-4745-bdf2-f00011135a06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d7c094bee660072e1041a3ee71c57ab99a7073a2408cb18b44fe9435eefb61d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-pzdj6" podUID="5066ee12-3d1a-4745-bdf2-f00011135a06" Oct 13 00:01:02.050751 containerd[1533]: time="2025-10-13T00:01:02.050698977Z" level=error msg="Failed to destroy network for sandbox \"634b140d19e0badc7bb0a567b0c93b51bafea1e7b351d9af48d0d6c77914c69a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.052808 containerd[1533]: time="2025-10-13T00:01:02.052589257Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5b686ccc97-gg2z6,Uid:c0133bef-7fb4-4183-a390-03737bc1326e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"634b140d19e0badc7bb0a567b0c93b51bafea1e7b351d9af48d0d6c77914c69a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.053044 kubelet[1864]: E1013 00:01:02.053000 1864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"634b140d19e0badc7bb0a567b0c93b51bafea1e7b351d9af48d0d6c77914c69a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.053131 kubelet[1864]: E1013 00:01:02.053063 1864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"634b140d19e0badc7bb0a567b0c93b51bafea1e7b351d9af48d0d6c77914c69a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b686ccc97-gg2z6" Oct 13 00:01:02.053131 kubelet[1864]: E1013 00:01:02.053081 1864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"634b140d19e0badc7bb0a567b0c93b51bafea1e7b351d9af48d0d6c77914c69a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b686ccc97-gg2z6" Oct 13 00:01:02.053175 kubelet[1864]: E1013 00:01:02.053130 1864 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b686ccc97-gg2z6_calico-apiserver(c0133bef-7fb4-4183-a390-03737bc1326e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b686ccc97-gg2z6_calico-apiserver(c0133bef-7fb4-4183-a390-03737bc1326e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"634b140d19e0badc7bb0a567b0c93b51bafea1e7b351d9af48d0d6c77914c69a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b686ccc97-gg2z6" podUID="c0133bef-7fb4-4183-a390-03737bc1326e" Oct 13 00:01:02.058987 containerd[1533]: time="2025-10-13T00:01:02.058934017Z" level=error msg="Failed to destroy network for sandbox \"2ec19ae54e441f2c4f9c9ba484b3222eb6d7b2f3f67746f7a6df24d16850a759\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.059218 containerd[1533]: time="2025-10-13T00:01:02.059187177Z" level=error msg="Failed to destroy network for sandbox \"4ba83594a60e43a1edef2c01271d19c638cac6fc5e7b53a3db8bac6b4cfc37e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.059866 containerd[1533]: time="2025-10-13T00:01:02.059825697Z" level=error msg="Failed to destroy network for sandbox \"8c72679af89ccb90f538f9416a0dc778dbb29502d47cfd72810d9ed221b2d470\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.059980 containerd[1533]: 
time="2025-10-13T00:01:02.059951417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-phvtw,Uid:ff46f70d-6961-4e41-b75b-3a39b35c2c50,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ec19ae54e441f2c4f9c9ba484b3222eb6d7b2f3f67746f7a6df24d16850a759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.060165 kubelet[1864]: E1013 00:01:02.060130 1864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ec19ae54e441f2c4f9c9ba484b3222eb6d7b2f3f67746f7a6df24d16850a759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.060220 kubelet[1864]: E1013 00:01:02.060180 1864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ec19ae54e441f2c4f9c9ba484b3222eb6d7b2f3f67746f7a6df24d16850a759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-phvtw" Oct 13 00:01:02.060220 kubelet[1864]: E1013 00:01:02.060199 1864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ec19ae54e441f2c4f9c9ba484b3222eb6d7b2f3f67746f7a6df24d16850a759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-phvtw" Oct 13 00:01:02.060274 kubelet[1864]: 
E1013 00:01:02.060252 1864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-phvtw_kube-system(ff46f70d-6961-4e41-b75b-3a39b35c2c50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-phvtw_kube-system(ff46f70d-6961-4e41-b75b-3a39b35c2c50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ec19ae54e441f2c4f9c9ba484b3222eb6d7b2f3f67746f7a6df24d16850a759\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-phvtw" podUID="ff46f70d-6961-4e41-b75b-3a39b35c2c50" Oct 13 00:01:02.061094 containerd[1533]: time="2025-10-13T00:01:02.061051337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd4c9b848-wxwtn,Uid:52988a1f-3347-4dbf-9cc8-0e7e030022e3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba83594a60e43a1edef2c01271d19c638cac6fc5e7b53a3db8bac6b4cfc37e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.061379 kubelet[1864]: E1013 00:01:02.061341 1864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba83594a60e43a1edef2c01271d19c638cac6fc5e7b53a3db8bac6b4cfc37e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.061434 kubelet[1864]: E1013 00:01:02.061395 1864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4ba83594a60e43a1edef2c01271d19c638cac6fc5e7b53a3db8bac6b4cfc37e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd4c9b848-wxwtn" Oct 13 00:01:02.061434 kubelet[1864]: E1013 00:01:02.061412 1864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba83594a60e43a1edef2c01271d19c638cac6fc5e7b53a3db8bac6b4cfc37e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd4c9b848-wxwtn" Oct 13 00:01:02.061525 kubelet[1864]: E1013 00:01:02.061467 1864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bd4c9b848-wxwtn_calico-system(52988a1f-3347-4dbf-9cc8-0e7e030022e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bd4c9b848-wxwtn_calico-system(52988a1f-3347-4dbf-9cc8-0e7e030022e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ba83594a60e43a1edef2c01271d19c638cac6fc5e7b53a3db8bac6b4cfc37e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bd4c9b848-wxwtn" podUID="52988a1f-3347-4dbf-9cc8-0e7e030022e3" Oct 13 00:01:02.061880 containerd[1533]: time="2025-10-13T00:01:02.061832697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-9nphb,Uid:2c33bcce-e6aa-438f-81c0-a1243e0458a8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c72679af89ccb90f538f9416a0dc778dbb29502d47cfd72810d9ed221b2d470\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.062139 kubelet[1864]: E1013 00:01:02.061996 1864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c72679af89ccb90f538f9416a0dc778dbb29502d47cfd72810d9ed221b2d470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.062139 kubelet[1864]: E1013 00:01:02.062033 1864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c72679af89ccb90f538f9416a0dc778dbb29502d47cfd72810d9ed221b2d470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-9nphb" Oct 13 00:01:02.062139 kubelet[1864]: E1013 00:01:02.062050 1864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c72679af89ccb90f538f9416a0dc778dbb29502d47cfd72810d9ed221b2d470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-9nphb" Oct 13 00:01:02.062236 kubelet[1864]: E1013 00:01:02.062090 1864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-854f97d977-9nphb_calico-system(2c33bcce-e6aa-438f-81c0-a1243e0458a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-854f97d977-9nphb_calico-system(2c33bcce-e6aa-438f-81c0-a1243e0458a8)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"8c72679af89ccb90f538f9416a0dc778dbb29502d47cfd72810d9ed221b2d470\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-854f97d977-9nphb" podUID="2c33bcce-e6aa-438f-81c0-a1243e0458a8" Oct 13 00:01:02.075506 containerd[1533]: time="2025-10-13T00:01:02.075441457Z" level=error msg="Failed to destroy network for sandbox \"ec464531d8d950b63ddcaceb76adb1c4f34e7b2c70eaa6fc7a731bcf5fdbf433\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.076615 containerd[1533]: time="2025-10-13T00:01:02.076581937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765759bb74-868z9,Uid:06a79750-ad5b-4923-88da-893494a6c052,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec464531d8d950b63ddcaceb76adb1c4f34e7b2c70eaa6fc7a731bcf5fdbf433\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.076926 kubelet[1864]: E1013 00:01:02.076785 1864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec464531d8d950b63ddcaceb76adb1c4f34e7b2c70eaa6fc7a731bcf5fdbf433\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:02.077024 kubelet[1864]: E1013 00:01:02.077009 1864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"ec464531d8d950b63ddcaceb76adb1c4f34e7b2c70eaa6fc7a731bcf5fdbf433\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-765759bb74-868z9" Oct 13 00:01:02.077088 kubelet[1864]: E1013 00:01:02.077075 1864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec464531d8d950b63ddcaceb76adb1c4f34e7b2c70eaa6fc7a731bcf5fdbf433\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-765759bb74-868z9" Oct 13 00:01:02.077202 kubelet[1864]: E1013 00:01:02.077179 1864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-765759bb74-868z9_calico-apiserver(06a79750-ad5b-4923-88da-893494a6c052)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-765759bb74-868z9_calico-apiserver(06a79750-ad5b-4923-88da-893494a6c052)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec464531d8d950b63ddcaceb76adb1c4f34e7b2c70eaa6fc7a731bcf5fdbf433\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-765759bb74-868z9" podUID="06a79750-ad5b-4923-88da-893494a6c052" Oct 13 00:01:02.756309 kubelet[1864]: E1013 00:01:02.756250 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:02.761306 systemd[1]: run-netns-cni\x2d0098a753\x2d924a\x2d1617\x2d33d3\x2d5e237dd459e3.mount: Deactivated successfully. 
Oct 13 00:01:02.761398 systemd[1]: run-netns-cni\x2dbfc8469b\x2ddc0f\x2d7aeb\x2d53fb\x2d79cd539d118b.mount: Deactivated successfully. Oct 13 00:01:03.757072 kubelet[1864]: E1013 00:01:03.757015 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:04.757817 kubelet[1864]: E1013 00:01:04.757742 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:05.758616 kubelet[1864]: E1013 00:01:05.758572 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:06.758958 kubelet[1864]: E1013 00:01:06.758771 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:07.741121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2240233900.mount: Deactivated successfully. Oct 13 00:01:07.759148 kubelet[1864]: E1013 00:01:07.759111 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:07.981379 containerd[1533]: time="2025-10-13T00:01:07.981325937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:07.982282 containerd[1533]: time="2025-10-13T00:01:07.982081297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Oct 13 00:01:07.983046 containerd[1533]: time="2025-10-13T00:01:07.983018657Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:07.985061 containerd[1533]: time="2025-10-13T00:01:07.985024177Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:07.986152 containerd[1533]: time="2025-10-13T00:01:07.986034457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 6.13068844s" Oct 13 00:01:07.986152 containerd[1533]: time="2025-10-13T00:01:07.986065337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Oct 13 00:01:07.996756 containerd[1533]: time="2025-10-13T00:01:07.996333017Z" level=info msg="CreateContainer within sandbox \"8aef25347b9d579dfdbe63c3ef3a4e8e8cb849d14f318acc249b5bfec9d6ed38\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 00:01:08.004646 containerd[1533]: time="2025-10-13T00:01:08.004613817Z" level=info msg="Container 9f85ba90c599f054807b73e7c3fb58b141f3a06269e445c57c17a47c2f0c8768: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:08.012860 containerd[1533]: time="2025-10-13T00:01:08.012814177Z" level=info msg="CreateContainer within sandbox \"8aef25347b9d579dfdbe63c3ef3a4e8e8cb849d14f318acc249b5bfec9d6ed38\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9f85ba90c599f054807b73e7c3fb58b141f3a06269e445c57c17a47c2f0c8768\"" Oct 13 00:01:08.013469 containerd[1533]: time="2025-10-13T00:01:08.013443977Z" level=info msg="StartContainer for \"9f85ba90c599f054807b73e7c3fb58b141f3a06269e445c57c17a47c2f0c8768\"" Oct 13 00:01:08.014990 containerd[1533]: time="2025-10-13T00:01:08.014958897Z" level=info msg="connecting to shim 
9f85ba90c599f054807b73e7c3fb58b141f3a06269e445c57c17a47c2f0c8768" address="unix:///run/containerd/s/d7815a32bd568b29f137ef8927e469235ccf9fd2b184f2cb7666582dfab1e558" protocol=ttrpc version=3 Oct 13 00:01:08.037977 systemd[1]: Started cri-containerd-9f85ba90c599f054807b73e7c3fb58b141f3a06269e445c57c17a47c2f0c8768.scope - libcontainer container 9f85ba90c599f054807b73e7c3fb58b141f3a06269e445c57c17a47c2f0c8768. Oct 13 00:01:08.073401 containerd[1533]: time="2025-10-13T00:01:08.073342497Z" level=info msg="StartContainer for \"9f85ba90c599f054807b73e7c3fb58b141f3a06269e445c57c17a47c2f0c8768\" returns successfully" Oct 13 00:01:08.186845 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 00:01:08.186946 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Oct 13 00:01:08.382336 kubelet[1864]: I1013 00:01:08.381986 1864 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqg82\" (UniqueName: \"kubernetes.io/projected/52988a1f-3347-4dbf-9cc8-0e7e030022e3-kube-api-access-qqg82\") pod \"52988a1f-3347-4dbf-9cc8-0e7e030022e3\" (UID: \"52988a1f-3347-4dbf-9cc8-0e7e030022e3\") " Oct 13 00:01:08.382336 kubelet[1864]: I1013 00:01:08.382038 1864 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/52988a1f-3347-4dbf-9cc8-0e7e030022e3-whisker-backend-key-pair\") pod \"52988a1f-3347-4dbf-9cc8-0e7e030022e3\" (UID: \"52988a1f-3347-4dbf-9cc8-0e7e030022e3\") " Oct 13 00:01:08.382336 kubelet[1864]: I1013 00:01:08.382059 1864 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52988a1f-3347-4dbf-9cc8-0e7e030022e3-whisker-ca-bundle\") pod \"52988a1f-3347-4dbf-9cc8-0e7e030022e3\" (UID: \"52988a1f-3347-4dbf-9cc8-0e7e030022e3\") " Oct 13 00:01:08.382674 kubelet[1864]: I1013 00:01:08.382589 1864
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52988a1f-3347-4dbf-9cc8-0e7e030022e3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "52988a1f-3347-4dbf-9cc8-0e7e030022e3" (UID: "52988a1f-3347-4dbf-9cc8-0e7e030022e3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 00:01:08.384619 kubelet[1864]: I1013 00:01:08.384540 1864 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52988a1f-3347-4dbf-9cc8-0e7e030022e3-kube-api-access-qqg82" (OuterVolumeSpecName: "kube-api-access-qqg82") pod "52988a1f-3347-4dbf-9cc8-0e7e030022e3" (UID: "52988a1f-3347-4dbf-9cc8-0e7e030022e3"). InnerVolumeSpecName "kube-api-access-qqg82". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 00:01:08.384959 kubelet[1864]: I1013 00:01:08.384919 1864 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52988a1f-3347-4dbf-9cc8-0e7e030022e3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "52988a1f-3347-4dbf-9cc8-0e7e030022e3" (UID: "52988a1f-3347-4dbf-9cc8-0e7e030022e3"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 00:01:08.482438 kubelet[1864]: I1013 00:01:08.482368 1864 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/52988a1f-3347-4dbf-9cc8-0e7e030022e3-whisker-backend-key-pair\") on node \"10.0.0.51\" DevicePath \"\"" Oct 13 00:01:08.482438 kubelet[1864]: I1013 00:01:08.482410 1864 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52988a1f-3347-4dbf-9cc8-0e7e030022e3-whisker-ca-bundle\") on node \"10.0.0.51\" DevicePath \"\"" Oct 13 00:01:08.482438 kubelet[1864]: I1013 00:01:08.482434 1864 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqg82\" (UniqueName: \"kubernetes.io/projected/52988a1f-3347-4dbf-9cc8-0e7e030022e3-kube-api-access-qqg82\") on node \"10.0.0.51\" DevicePath \"\"" Oct 13 00:01:08.500994 systemd[1]: Created slice kubepods-besteffort-pod97c7db48_348c_4ea9_b189_7f14219279eb.slice - libcontainer container kubepods-besteffort-pod97c7db48_348c_4ea9_b189_7f14219279eb.slice. Oct 13 00:01:08.582782 kubelet[1864]: I1013 00:01:08.582731 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5tlb\" (UniqueName: \"kubernetes.io/projected/97c7db48-348c-4ea9-b189-7f14219279eb-kube-api-access-b5tlb\") pod \"nginx-deployment-bb8f74bfb-cz8t5\" (UID: \"97c7db48-348c-4ea9-b189-7f14219279eb\") " pod="default/nginx-deployment-bb8f74bfb-cz8t5" Oct 13 00:01:08.742197 systemd[1]: var-lib-kubelet-pods-52988a1f\x2d3347\x2d4dbf\x2d9cc8\x2d0e7e030022e3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqqg82.mount: Deactivated successfully. Oct 13 00:01:08.742310 systemd[1]: var-lib-kubelet-pods-52988a1f\x2d3347\x2d4dbf\x2d9cc8\x2d0e7e030022e3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 13 00:01:08.759893 kubelet[1864]: E1013 00:01:08.759850 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:08.806710 containerd[1533]: time="2025-10-13T00:01:08.806664817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-cz8t5,Uid:97c7db48-348c-4ea9-b189-7f14219279eb,Namespace:default,Attempt:0,}" Oct 13 00:01:08.880553 systemd[1]: Removed slice kubepods-besteffort-pod52988a1f_3347_4dbf_9cc8_0e7e030022e3.slice - libcontainer container kubepods-besteffort-pod52988a1f_3347_4dbf_9cc8_0e7e030022e3.slice. Oct 13 00:01:08.900696 kubelet[1864]: I1013 00:01:08.900631 1864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-b5tbc" podStartSLOduration=3.671544697 podStartE2EDuration="17.900612617s" podCreationTimestamp="2025-10-13 00:00:51 +0000 UTC" firstStartedPulling="2025-10-13 00:00:53.757589617 +0000 UTC m=+3.061180561" lastFinishedPulling="2025-10-13 00:01:07.986657577 +0000 UTC m=+17.290248481" observedRunningTime="2025-10-13 00:01:08.890644617 +0000 UTC m=+18.194235681" watchObservedRunningTime="2025-10-13 00:01:08.900612617 +0000 UTC m=+18.204203561" Oct 13 00:01:08.934320 systemd-networkd[1435]: calibbb0822ee1d: Link UP Oct 13 00:01:08.934720 systemd-networkd[1435]: calibbb0822ee1d: Gained carrier Oct 13 00:01:08.948048 containerd[1533]: 2025-10-13 00:01:08.826 [INFO][2739] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 00:01:08.948048 containerd[1533]: 2025-10-13 00:01:08.844 [INFO][2739] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-eth0 nginx-deployment-bb8f74bfb- default 97c7db48-348c-4ea9-b189-7f14219279eb 1044 0 2025-10-13 00:01:08 +0000 UTC map[app:nginx pod-template-hash:bb8f74bfb projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.51 nginx-deployment-bb8f74bfb-cz8t5 eth0 default [] [] [kns.default ksa.default.default] calibbb0822ee1d [] [] }} ContainerID="f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" Namespace="default" Pod="nginx-deployment-bb8f74bfb-cz8t5" WorkloadEndpoint="10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-" Oct 13 00:01:08.948048 containerd[1533]: 2025-10-13 00:01:08.844 [INFO][2739] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" Namespace="default" Pod="nginx-deployment-bb8f74bfb-cz8t5" WorkloadEndpoint="10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-eth0" Oct 13 00:01:08.948048 containerd[1533]: 2025-10-13 00:01:08.884 [INFO][2754] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" HandleID="k8s-pod-network.f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" Workload="10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-eth0" Oct 13 00:01:08.948396 containerd[1533]: 2025-10-13 00:01:08.884 [INFO][2754] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" HandleID="k8s-pod-network.f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" Workload="10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b4050), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.51", "pod":"nginx-deployment-bb8f74bfb-cz8t5", "timestamp":"2025-10-13 00:01:08.884721977 +0000 UTC"}, Hostname:"10.0.0.51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:08.948396 containerd[1533]: 
2025-10-13 00:01:08.884 [INFO][2754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:08.948396 containerd[1533]: 2025-10-13 00:01:08.885 [INFO][2754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:01:08.948396 containerd[1533]: 2025-10-13 00:01:08.885 [INFO][2754] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.51' Oct 13 00:01:08.948396 containerd[1533]: 2025-10-13 00:01:08.896 [INFO][2754] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" host="10.0.0.51" Oct 13 00:01:08.948396 containerd[1533]: 2025-10-13 00:01:08.902 [INFO][2754] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.51" Oct 13 00:01:08.948396 containerd[1533]: 2025-10-13 00:01:08.908 [INFO][2754] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:08.948396 containerd[1533]: 2025-10-13 00:01:08.910 [INFO][2754] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:08.948396 containerd[1533]: 2025-10-13 00:01:08.913 [INFO][2754] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:08.948396 containerd[1533]: 2025-10-13 00:01:08.913 [INFO][2754] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" host="10.0.0.51" Oct 13 00:01:08.948632 containerd[1533]: 2025-10-13 00:01:08.915 [INFO][2754] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e Oct 13 00:01:08.948632 containerd[1533]: 2025-10-13 00:01:08.920 [INFO][2754] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" 
host="10.0.0.51" Oct 13 00:01:08.948632 containerd[1533]: 2025-10-13 00:01:08.926 [INFO][2754] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.193/26] block=192.168.109.192/26 handle="k8s-pod-network.f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" host="10.0.0.51" Oct 13 00:01:08.948632 containerd[1533]: 2025-10-13 00:01:08.926 [INFO][2754] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.193/26] handle="k8s-pod-network.f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" host="10.0.0.51" Oct 13 00:01:08.948632 containerd[1533]: 2025-10-13 00:01:08.926 [INFO][2754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:01:08.948632 containerd[1533]: 2025-10-13 00:01:08.926 [INFO][2754] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.193/26] IPv6=[] ContainerID="f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" HandleID="k8s-pod-network.f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" Workload="10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-eth0" Oct 13 00:01:08.948739 containerd[1533]: 2025-10-13 00:01:08.929 [INFO][2739] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" Namespace="default" Pod="nginx-deployment-bb8f74bfb-cz8t5" WorkloadEndpoint="10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"97c7db48-348c-4ea9-b189-7f14219279eb", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 1, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", 
"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"", Pod:"nginx-deployment-bb8f74bfb-cz8t5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.109.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calibbb0822ee1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:08.948739 containerd[1533]: 2025-10-13 00:01:08.929 [INFO][2739] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.193/32] ContainerID="f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" Namespace="default" Pod="nginx-deployment-bb8f74bfb-cz8t5" WorkloadEndpoint="10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-eth0" Oct 13 00:01:08.948833 containerd[1533]: 2025-10-13 00:01:08.929 [INFO][2739] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbb0822ee1d ContainerID="f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" Namespace="default" Pod="nginx-deployment-bb8f74bfb-cz8t5" WorkloadEndpoint="10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-eth0" Oct 13 00:01:08.948833 containerd[1533]: 2025-10-13 00:01:08.934 [INFO][2739] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" Namespace="default" Pod="nginx-deployment-bb8f74bfb-cz8t5" WorkloadEndpoint="10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-eth0" Oct 13 00:01:08.948883 containerd[1533]: 2025-10-13 00:01:08.935 [INFO][2739] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" Namespace="default" Pod="nginx-deployment-bb8f74bfb-cz8t5" WorkloadEndpoint="10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"97c7db48-348c-4ea9-b189-7f14219279eb", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 1, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e", Pod:"nginx-deployment-bb8f74bfb-cz8t5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.109.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calibbb0822ee1d", MAC:"ea:b1:ea:c2:9c:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:08.948930 containerd[1533]: 2025-10-13 00:01:08.946 [INFO][2739] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" Namespace="default" Pod="nginx-deployment-bb8f74bfb-cz8t5" 
WorkloadEndpoint="10.0.0.51-k8s-nginx--deployment--bb8f74bfb--cz8t5-eth0" Oct 13 00:01:08.965311 containerd[1533]: time="2025-10-13T00:01:08.965265457Z" level=info msg="connecting to shim f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e" address="unix:///run/containerd/s/40400c4e5eff272f1984d55265eafaacc66fe627775801a413b93884776ad4c2" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:08.985996 systemd[1]: Started cri-containerd-f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e.scope - libcontainer container f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e. Oct 13 00:01:08.996064 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:01:09.018202 containerd[1533]: time="2025-10-13T00:01:09.018164497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-cz8t5,Uid:97c7db48-348c-4ea9-b189-7f14219279eb,Namespace:default,Attempt:0,} returns sandbox id \"f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e\"" Oct 13 00:01:09.020187 containerd[1533]: time="2025-10-13T00:01:09.020030337Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Oct 13 00:01:09.760713 kubelet[1864]: E1013 00:01:09.760641 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:09.830931 kubelet[1864]: I1013 00:01:09.830697 1864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52988a1f-3347-4dbf-9cc8-0e7e030022e3" path="/var/lib/kubelet/pods/52988a1f-3347-4dbf-9cc8-0e7e030022e3/volumes" Oct 13 00:01:09.883682 systemd-networkd[1435]: vxlan.calico: Link UP Oct 13 00:01:09.883809 systemd-networkd[1435]: vxlan.calico: Gained carrier Oct 13 00:01:09.963037 containerd[1533]: time="2025-10-13T00:01:09.962999937Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"9f85ba90c599f054807b73e7c3fb58b141f3a06269e445c57c17a47c2f0c8768\" id:\"6728f3b442b8602551bc728a8f66fa1b427c63968add86a1e0ce5d1c24305ab6\" pid:2988 exit_status:1 exited_at:{seconds:1760313669 nanos:962515897}" Oct 13 00:01:10.446967 systemd-networkd[1435]: calibbb0822ee1d: Gained IPv6LL Oct 13 00:01:10.761023 kubelet[1864]: E1013 00:01:10.760902 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:10.953204 containerd[1533]: time="2025-10-13T00:01:10.953089137Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f85ba90c599f054807b73e7c3fb58b141f3a06269e445c57c17a47c2f0c8768\" id:\"4fd576514bb6af0e6c43abaf22422c8427ca1c7b0b3d396c5091143ec38a23ef\" pid:3059 exit_status:1 exited_at:{seconds:1760313670 nanos:952781257}" Oct 13 00:01:11.406946 systemd-networkd[1435]: vxlan.calico: Gained IPv6LL Oct 13 00:01:11.749886 kubelet[1864]: E1013 00:01:11.749824 1864 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:11.761382 kubelet[1864]: E1013 00:01:11.761344 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:12.762407 kubelet[1864]: E1013 00:01:12.762371 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:12.829841 containerd[1533]: time="2025-10-13T00:01:12.829562897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b686ccc97-gg2z6,Uid:c0133bef-7fb4-4183-a390-03737bc1326e,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:01:12.839906 containerd[1533]: time="2025-10-13T00:01:12.839512337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-phvtw,Uid:ff46f70d-6961-4e41-b75b-3a39b35c2c50,Namespace:kube-system,Attempt:0,}" Oct 13 00:01:12.974486 systemd-networkd[1435]: 
cali030f82bed86: Link UP Oct 13 00:01:12.975360 systemd-networkd[1435]: cali030f82bed86: Gained carrier Oct 13 00:01:12.993540 containerd[1533]: 2025-10-13 00:01:12.884 [INFO][3082] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0 calico-apiserver-5b686ccc97- calico-apiserver c0133bef-7fb4-4183-a390-03737bc1326e 965 0 2025-10-13 00:00:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b686ccc97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.51 calico-apiserver-5b686ccc97-gg2z6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali030f82bed86 [] [] }} ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-gg2z6" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-" Oct 13 00:01:12.993540 containerd[1533]: 2025-10-13 00:01:12.884 [INFO][3082] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-gg2z6" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0" Oct 13 00:01:12.993540 containerd[1533]: 2025-10-13 00:01:12.924 [INFO][3112] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" HandleID="k8s-pod-network.f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Workload="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0" Oct 13 00:01:12.993741 containerd[1533]: 2025-10-13 00:01:12.925 [INFO][3112] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" HandleID="k8s-pod-network.f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Workload="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136330), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.51", "pod":"calico-apiserver-5b686ccc97-gg2z6", "timestamp":"2025-10-13 00:01:12.924446097 +0000 UTC"}, Hostname:"10.0.0.51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:12.993741 containerd[1533]: 2025-10-13 00:01:12.925 [INFO][3112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:12.993741 containerd[1533]: 2025-10-13 00:01:12.925 [INFO][3112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:01:12.993741 containerd[1533]: 2025-10-13 00:01:12.925 [INFO][3112] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.51' Oct 13 00:01:12.993741 containerd[1533]: 2025-10-13 00:01:12.936 [INFO][3112] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" host="10.0.0.51" Oct 13 00:01:12.993741 containerd[1533]: 2025-10-13 00:01:12.942 [INFO][3112] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.51" Oct 13 00:01:12.993741 containerd[1533]: 2025-10-13 00:01:12.948 [INFO][3112] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:12.993741 containerd[1533]: 2025-10-13 00:01:12.951 [INFO][3112] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:12.993741 containerd[1533]: 2025-10-13 00:01:12.953 [INFO][3112] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:12.993741 containerd[1533]: 2025-10-13 00:01:12.953 [INFO][3112] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" host="10.0.0.51" Oct 13 00:01:12.994310 containerd[1533]: 2025-10-13 00:01:12.956 [INFO][3112] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c Oct 13 00:01:12.994310 containerd[1533]: 2025-10-13 00:01:12.961 [INFO][3112] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" host="10.0.0.51" Oct 13 00:01:12.994310 containerd[1533]: 2025-10-13 00:01:12.968 [INFO][3112] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.194/26] block=192.168.109.192/26 handle="k8s-pod-network.f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" host="10.0.0.51" Oct 13 00:01:12.994310 containerd[1533]: 2025-10-13 00:01:12.968 [INFO][3112] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.194/26] handle="k8s-pod-network.f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" host="10.0.0.51" Oct 13 00:01:12.994310 containerd[1533]: 2025-10-13 00:01:12.969 [INFO][3112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:01:12.994310 containerd[1533]: 2025-10-13 00:01:12.969 [INFO][3112] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.194/26] IPv6=[] ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" HandleID="k8s-pod-network.f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Workload="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0" Oct 13 00:01:12.994460 containerd[1533]: 2025-10-13 00:01:12.971 [INFO][3082] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-gg2z6" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0", GenerateName:"calico-apiserver-5b686ccc97-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0133bef-7fb4-4183-a390-03737bc1326e", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b686ccc97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"", Pod:"calico-apiserver-5b686ccc97-gg2z6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali030f82bed86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:12.994516 containerd[1533]: 2025-10-13 00:01:12.971 [INFO][3082] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.194/32] ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-gg2z6" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0" Oct 13 00:01:12.994516 containerd[1533]: 2025-10-13 00:01:12.971 [INFO][3082] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali030f82bed86 ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-gg2z6" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0" Oct 13 00:01:12.994516 containerd[1533]: 2025-10-13 00:01:12.975 [INFO][3082] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-gg2z6" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0" Oct 13 00:01:12.994577 containerd[1533]: 2025-10-13 00:01:12.976 [INFO][3082] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-gg2z6" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0", 
GenerateName:"calico-apiserver-5b686ccc97-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0133bef-7fb4-4183-a390-03737bc1326e", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b686ccc97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c", Pod:"calico-apiserver-5b686ccc97-gg2z6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali030f82bed86", MAC:"ca:47:5a:21:52:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:12.994622 containerd[1533]: 2025-10-13 00:01:12.989 [INFO][3082] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-gg2z6" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0" Oct 13 00:01:13.066844 containerd[1533]: time="2025-10-13T00:01:13.066533737Z" level=info msg="connecting to shim f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" 
address="unix:///run/containerd/s/155044cf0be201445a89795e10e8c6c2f6eda1f87476207eefe9bed5060fa22f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:13.080429 systemd-networkd[1435]: cali146f0a4ad48: Link UP Oct 13 00:01:13.084800 systemd-networkd[1435]: cali146f0a4ad48: Gained carrier Oct 13 00:01:13.095699 containerd[1533]: 2025-10-13 00:01:12.881 [INFO][3093] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.51-k8s-coredns--66bc5c9577--phvtw-eth0 coredns-66bc5c9577- kube-system ff46f70d-6961-4e41-b75b-3a39b35c2c50 968 0 2025-10-13 00:00:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.51 coredns-66bc5c9577-phvtw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali146f0a4ad48 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" Namespace="kube-system" Pod="coredns-66bc5c9577-phvtw" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--phvtw-" Oct 13 00:01:13.095699 containerd[1533]: 2025-10-13 00:01:12.882 [INFO][3093] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" Namespace="kube-system" Pod="coredns-66bc5c9577-phvtw" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--phvtw-eth0" Oct 13 00:01:13.095699 containerd[1533]: 2025-10-13 00:01:12.926 [INFO][3111] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" HandleID="k8s-pod-network.5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" Workload="10.0.0.51-k8s-coredns--66bc5c9577--phvtw-eth0" Oct 13 00:01:13.096303 containerd[1533]: 
2025-10-13 00:01:12.926 [INFO][3111] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" HandleID="k8s-pod-network.5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" Workload="10.0.0.51-k8s-coredns--66bc5c9577--phvtw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000255600), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.51", "pod":"coredns-66bc5c9577-phvtw", "timestamp":"2025-10-13 00:01:12.926283057 +0000 UTC"}, Hostname:"10.0.0.51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:13.096303 containerd[1533]: 2025-10-13 00:01:12.926 [INFO][3111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:13.096303 containerd[1533]: 2025-10-13 00:01:12.969 [INFO][3111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:01:13.096303 containerd[1533]: 2025-10-13 00:01:12.969 [INFO][3111] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.51' Oct 13 00:01:13.096303 containerd[1533]: 2025-10-13 00:01:13.036 [INFO][3111] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" host="10.0.0.51" Oct 13 00:01:13.096303 containerd[1533]: 2025-10-13 00:01:13.044 [INFO][3111] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.51" Oct 13 00:01:13.096303 containerd[1533]: 2025-10-13 00:01:13.051 [INFO][3111] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:13.096303 containerd[1533]: 2025-10-13 00:01:13.053 [INFO][3111] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:13.096303 containerd[1533]: 2025-10-13 00:01:13.057 [INFO][3111] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:13.096303 containerd[1533]: 2025-10-13 00:01:13.057 [INFO][3111] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" host="10.0.0.51" Oct 13 00:01:13.096517 containerd[1533]: 2025-10-13 00:01:13.061 [INFO][3111] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f Oct 13 00:01:13.096517 containerd[1533]: 2025-10-13 00:01:13.066 [INFO][3111] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" host="10.0.0.51" Oct 13 00:01:13.096517 containerd[1533]: 2025-10-13 00:01:13.074 [INFO][3111] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.195/26] block=192.168.109.192/26 
handle="k8s-pod-network.5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" host="10.0.0.51" Oct 13 00:01:13.096517 containerd[1533]: 2025-10-13 00:01:13.074 [INFO][3111] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.195/26] handle="k8s-pod-network.5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" host="10.0.0.51" Oct 13 00:01:13.096517 containerd[1533]: 2025-10-13 00:01:13.074 [INFO][3111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:01:13.096517 containerd[1533]: 2025-10-13 00:01:13.074 [INFO][3111] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.195/26] IPv6=[] ContainerID="5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" HandleID="k8s-pod-network.5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" Workload="10.0.0.51-k8s-coredns--66bc5c9577--phvtw-eth0" Oct 13 00:01:13.096626 containerd[1533]: 2025-10-13 00:01:13.077 [INFO][3093] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" Namespace="kube-system" Pod="coredns-66bc5c9577-phvtw" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--phvtw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-coredns--66bc5c9577--phvtw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ff46f70d-6961-4e41-b75b-3a39b35c2c50", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"", Pod:"coredns-66bc5c9577-phvtw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali146f0a4ad48", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:13.096626 containerd[1533]: 2025-10-13 00:01:13.077 [INFO][3093] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.195/32] ContainerID="5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" Namespace="kube-system" Pod="coredns-66bc5c9577-phvtw" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--phvtw-eth0" Oct 13 00:01:13.096626 containerd[1533]: 2025-10-13 00:01:13.077 [INFO][3093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali146f0a4ad48 ContainerID="5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" Namespace="kube-system" Pod="coredns-66bc5c9577-phvtw" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--phvtw-eth0" Oct 
13 00:01:13.096626 containerd[1533]: 2025-10-13 00:01:13.080 [INFO][3093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" Namespace="kube-system" Pod="coredns-66bc5c9577-phvtw" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--phvtw-eth0" Oct 13 00:01:13.096626 containerd[1533]: 2025-10-13 00:01:13.082 [INFO][3093] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" Namespace="kube-system" Pod="coredns-66bc5c9577-phvtw" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--phvtw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-coredns--66bc5c9577--phvtw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ff46f70d-6961-4e41-b75b-3a39b35c2c50", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f", Pod:"coredns-66bc5c9577-phvtw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali146f0a4ad48", 
MAC:"4a:e5:31:93:2e:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:13.096626 containerd[1533]: 2025-10-13 00:01:13.092 [INFO][3093] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" Namespace="kube-system" Pod="coredns-66bc5c9577-phvtw" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--phvtw-eth0" Oct 13 00:01:13.114429 systemd[1]: Started cri-containerd-f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c.scope - libcontainer container f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c. 
Oct 13 00:01:13.125064 containerd[1533]: time="2025-10-13T00:01:13.125014937Z" level=info msg="connecting to shim 5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f" address="unix:///run/containerd/s/437d3b8ce7f0417ac9a56b3d1cc56bae90b13365bb6f6ef68a465463122b3a3e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:13.131064 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:01:13.148066 systemd[1]: Started cri-containerd-5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f.scope - libcontainer container 5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f. Oct 13 00:01:13.154571 containerd[1533]: time="2025-10-13T00:01:13.154523657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b686ccc97-gg2z6,Uid:c0133bef-7fb4-4183-a390-03737bc1326e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c\"" Oct 13 00:01:13.163423 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:01:13.186336 containerd[1533]: time="2025-10-13T00:01:13.186275137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-phvtw,Uid:ff46f70d-6961-4e41-b75b-3a39b35c2c50,Namespace:kube-system,Attempt:0,} returns sandbox id \"5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f\"" Oct 13 00:01:13.432080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1514187289.mount: Deactivated successfully. 
Oct 13 00:01:13.763036 kubelet[1864]: E1013 00:01:13.762671 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:13.960637 containerd[1533]: time="2025-10-13T00:01:13.960575657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8zhm2,Uid:9a5371c7-64f8-48a1-bd35-b322196d1b79,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:13.963128 containerd[1533]: time="2025-10-13T00:01:13.963094097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765759bb74-868z9,Uid:06a79750-ad5b-4923-88da-893494a6c052,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:01:14.047272 containerd[1533]: time="2025-10-13T00:01:14.046920208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:14.047749 containerd[1533]: time="2025-10-13T00:01:14.047717734Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=70015687" Oct 13 00:01:14.049579 containerd[1533]: time="2025-10-13T00:01:14.049539908Z" level=info msg="ImageCreate event name:\"sha256:e1e3942d93b7c9e68a5e902395859d4f53de5aa9a187cba800c72cee6f9cb03f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:14.053464 containerd[1533]: time="2025-10-13T00:01:14.053426257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:0c4ba30a5f6a65d2bbdf93f2eff51d5304fd8c7f92cfc83a135a226aa2cd96af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:14.054979 containerd[1533]: time="2025-10-13T00:01:14.054947269Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e1e3942d93b7c9e68a5e902395859d4f53de5aa9a187cba800c72cee6f9cb03f\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:0c4ba30a5f6a65d2bbdf93f2eff51d5304fd8c7f92cfc83a135a226aa2cd96af\", size 
\"70015565\" in 5.034517692s" Oct 13 00:01:14.055028 containerd[1533]: time="2025-10-13T00:01:14.054983669Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e1e3942d93b7c9e68a5e902395859d4f53de5aa9a187cba800c72cee6f9cb03f\"" Oct 13 00:01:14.055872 containerd[1533]: time="2025-10-13T00:01:14.055824355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 00:01:14.059366 containerd[1533]: time="2025-10-13T00:01:14.059330662Z" level=info msg="CreateContainer within sandbox \"f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Oct 13 00:01:14.069313 containerd[1533]: time="2025-10-13T00:01:14.069246338Z" level=info msg="Container ae662877713c1f7378121c5d087330c1b1d13c16f9d741ddb6d58843b9a698a1: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:14.087746 containerd[1533]: time="2025-10-13T00:01:14.087656438Z" level=info msg="CreateContainer within sandbox \"f7eddd17b3296343c0dc41dbb559275f9c3512bf8bd65afc9b590e5e89fb8f9e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ae662877713c1f7378121c5d087330c1b1d13c16f9d741ddb6d58843b9a698a1\"" Oct 13 00:01:14.088614 containerd[1533]: time="2025-10-13T00:01:14.088581605Z" level=info msg="StartContainer for \"ae662877713c1f7378121c5d087330c1b1d13c16f9d741ddb6d58843b9a698a1\"" Oct 13 00:01:14.089527 containerd[1533]: time="2025-10-13T00:01:14.089453652Z" level=info msg="connecting to shim ae662877713c1f7378121c5d087330c1b1d13c16f9d741ddb6d58843b9a698a1" address="unix:///run/containerd/s/40400c4e5eff272f1984d55265eafaacc66fe627775801a413b93884776ad4c2" protocol=ttrpc version=3 Oct 13 00:01:14.094973 systemd-networkd[1435]: cali030f82bed86: Gained IPv6LL Oct 13 00:01:14.101207 systemd-networkd[1435]: calie68424a3e48: Link UP Oct 13 00:01:14.101737 systemd-networkd[1435]: calie68424a3e48: Gained carrier Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 
00:01:14.019 [INFO][3248] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-eth0 calico-apiserver-765759bb74- calico-apiserver 06a79750-ad5b-4923-88da-893494a6c052 971 0 2025-10-13 00:00:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:765759bb74 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.51 calico-apiserver-765759bb74-868z9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie68424a3e48 [] [] }} ContainerID="e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" Namespace="calico-apiserver" Pod="calico-apiserver-765759bb74-868z9" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-" Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.019 [INFO][3248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" Namespace="calico-apiserver" Pod="calico-apiserver-765759bb74-868z9" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-eth0" Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.049 [INFO][3285] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" HandleID="k8s-pod-network.e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" Workload="10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-eth0" Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.050 [INFO][3285] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" HandleID="k8s-pod-network.e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" 
Workload="10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.51", "pod":"calico-apiserver-765759bb74-868z9", "timestamp":"2025-10-13 00:01:14.04985319 +0000 UTC"}, Hostname:"10.0.0.51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.050 [INFO][3285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.050 [INFO][3285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.050 [INFO][3285] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.51' Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.062 [INFO][3285] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" host="10.0.0.51" Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.069 [INFO][3285] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.51" Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.075 [INFO][3285] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.078 [INFO][3285] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.083 [INFO][3285] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.083 [INFO][3285] ipam/ipam.go 1220: Attempting to assign 1 addresses from 
block block=192.168.109.192/26 handle="k8s-pod-network.e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" host="10.0.0.51" Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.085 [INFO][3285] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927 Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.091 [INFO][3285] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" host="10.0.0.51" Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.097 [INFO][3285] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.196/26] block=192.168.109.192/26 handle="k8s-pod-network.e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" host="10.0.0.51" Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.097 [INFO][3285] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.196/26] handle="k8s-pod-network.e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" host="10.0.0.51" Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.097 [INFO][3285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:01:14.112977 containerd[1533]: 2025-10-13 00:01:14.097 [INFO][3285] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.196/26] IPv6=[] ContainerID="e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" HandleID="k8s-pod-network.e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" Workload="10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-eth0" Oct 13 00:01:14.113914 containerd[1533]: 2025-10-13 00:01:14.099 [INFO][3248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" Namespace="calico-apiserver" Pod="calico-apiserver-765759bb74-868z9" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-eth0", GenerateName:"calico-apiserver-765759bb74-", Namespace:"calico-apiserver", SelfLink:"", UID:"06a79750-ad5b-4923-88da-893494a6c052", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765759bb74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"", Pod:"calico-apiserver-765759bb74-868z9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie68424a3e48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:14.113914 containerd[1533]: 2025-10-13 00:01:14.099 [INFO][3248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.196/32] ContainerID="e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" Namespace="calico-apiserver" Pod="calico-apiserver-765759bb74-868z9" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-eth0" Oct 13 00:01:14.113914 containerd[1533]: 2025-10-13 00:01:14.099 [INFO][3248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie68424a3e48 ContainerID="e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" Namespace="calico-apiserver" Pod="calico-apiserver-765759bb74-868z9" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-eth0" Oct 13 00:01:14.113914 containerd[1533]: 2025-10-13 00:01:14.102 [INFO][3248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" Namespace="calico-apiserver" Pod="calico-apiserver-765759bb74-868z9" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-eth0" Oct 13 00:01:14.113914 containerd[1533]: 2025-10-13 00:01:14.103 [INFO][3248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" Namespace="calico-apiserver" Pod="calico-apiserver-765759bb74-868z9" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-eth0", 
GenerateName:"calico-apiserver-765759bb74-", Namespace:"calico-apiserver", SelfLink:"", UID:"06a79750-ad5b-4923-88da-893494a6c052", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765759bb74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927", Pod:"calico-apiserver-765759bb74-868z9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie68424a3e48", MAC:"0e:1f:01:96:c3:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:14.113914 containerd[1533]: 2025-10-13 00:01:14.111 [INFO][3248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" Namespace="calico-apiserver" Pod="calico-apiserver-765759bb74-868z9" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--765759bb74--868z9-eth0" Oct 13 00:01:14.135003 systemd[1]: Started cri-containerd-ae662877713c1f7378121c5d087330c1b1d13c16f9d741ddb6d58843b9a698a1.scope - libcontainer container ae662877713c1f7378121c5d087330c1b1d13c16f9d741ddb6d58843b9a698a1. 
Oct 13 00:01:14.167022 containerd[1533]: time="2025-10-13T00:01:14.166961322Z" level=info msg="StartContainer for \"ae662877713c1f7378121c5d087330c1b1d13c16f9d741ddb6d58843b9a698a1\" returns successfully" Oct 13 00:01:14.168318 containerd[1533]: time="2025-10-13T00:01:14.168178411Z" level=info msg="connecting to shim e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927" address="unix:///run/containerd/s/6d8211eccbee641b0619ca2c5fb37185e08ca65fd8faf6603774034b19a9db08" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:14.201030 systemd[1]: Started cri-containerd-e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927.scope - libcontainer container e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927. Oct 13 00:01:14.208856 systemd-networkd[1435]: cali293d87da418: Link UP Oct 13 00:01:14.209324 systemd-networkd[1435]: cali293d87da418: Gained carrier Oct 13 00:01:14.219214 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.022 [INFO][3242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.51-k8s-csi--node--driver--8zhm2-eth0 csi-node-driver- calico-system 9a5371c7-64f8-48a1-bd35-b322196d1b79 897 0 2025-10-13 00:00:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:f8549cf5c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.51 csi-node-driver-8zhm2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali293d87da418 [] [] }} ContainerID="6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" Namespace="calico-system" Pod="csi-node-driver-8zhm2" WorkloadEndpoint="10.0.0.51-k8s-csi--node--driver--8zhm2-" 
Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.023 [INFO][3242] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" Namespace="calico-system" Pod="csi-node-driver-8zhm2" WorkloadEndpoint="10.0.0.51-k8s-csi--node--driver--8zhm2-eth0" Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.058 [INFO][3287] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" HandleID="k8s-pod-network.6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" Workload="10.0.0.51-k8s-csi--node--driver--8zhm2-eth0" Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.058 [INFO][3287] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" HandleID="k8s-pod-network.6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" Workload="10.0.0.51-k8s-csi--node--driver--8zhm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d4e0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.51", "pod":"csi-node-driver-8zhm2", "timestamp":"2025-10-13 00:01:14.058342815 +0000 UTC"}, Hostname:"10.0.0.51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.058 [INFO][3287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.097 [INFO][3287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.097 [INFO][3287] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.51' Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.166 [INFO][3287] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" host="10.0.0.51" Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.172 [INFO][3287] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.51" Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.177 [INFO][3287] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.181 [INFO][3287] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.185 [INFO][3287] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.185 [INFO][3287] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" host="10.0.0.51" Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.187 [INFO][3287] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680 Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.192 [INFO][3287] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" host="10.0.0.51" Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.199 [INFO][3287] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.197/26] block=192.168.109.192/26 
handle="k8s-pod-network.6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" host="10.0.0.51" Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.199 [INFO][3287] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.197/26] handle="k8s-pod-network.6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" host="10.0.0.51" Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.199 [INFO][3287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:01:14.223372 containerd[1533]: 2025-10-13 00:01:14.199 [INFO][3287] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.197/26] IPv6=[] ContainerID="6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" HandleID="k8s-pod-network.6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" Workload="10.0.0.51-k8s-csi--node--driver--8zhm2-eth0" Oct 13 00:01:14.223934 containerd[1533]: 2025-10-13 00:01:14.204 [INFO][3242] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" Namespace="calico-system" Pod="csi-node-driver-8zhm2" WorkloadEndpoint="10.0.0.51-k8s-csi--node--driver--8zhm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-csi--node--driver--8zhm2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a5371c7-64f8-48a1-bd35-b322196d1b79", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"", Pod:"csi-node-driver-8zhm2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali293d87da418", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:14.223934 containerd[1533]: 2025-10-13 00:01:14.204 [INFO][3242] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.197/32] ContainerID="6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" Namespace="calico-system" Pod="csi-node-driver-8zhm2" WorkloadEndpoint="10.0.0.51-k8s-csi--node--driver--8zhm2-eth0" Oct 13 00:01:14.223934 containerd[1533]: 2025-10-13 00:01:14.204 [INFO][3242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali293d87da418 ContainerID="6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" Namespace="calico-system" Pod="csi-node-driver-8zhm2" WorkloadEndpoint="10.0.0.51-k8s-csi--node--driver--8zhm2-eth0" Oct 13 00:01:14.223934 containerd[1533]: 2025-10-13 00:01:14.212 [INFO][3242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" Namespace="calico-system" Pod="csi-node-driver-8zhm2" WorkloadEndpoint="10.0.0.51-k8s-csi--node--driver--8zhm2-eth0" Oct 13 00:01:14.223934 containerd[1533]: 2025-10-13 00:01:14.212 [INFO][3242] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" 
Namespace="calico-system" Pod="csi-node-driver-8zhm2" WorkloadEndpoint="10.0.0.51-k8s-csi--node--driver--8zhm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-csi--node--driver--8zhm2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a5371c7-64f8-48a1-bd35-b322196d1b79", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680", Pod:"csi-node-driver-8zhm2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali293d87da418", MAC:"5e:2e:e5:29:79:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:14.223934 containerd[1533]: 2025-10-13 00:01:14.221 [INFO][3242] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" Namespace="calico-system" Pod="csi-node-driver-8zhm2" 
WorkloadEndpoint="10.0.0.51-k8s-csi--node--driver--8zhm2-eth0" Oct 13 00:01:14.244013 containerd[1533]: time="2025-10-13T00:01:14.243949788Z" level=info msg="connecting to shim 6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680" address="unix:///run/containerd/s/70ce6089643c02ef5b379a361df7fd2e064b68dba2584b7df5ba3a68f096a2ab" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:14.252643 containerd[1533]: time="2025-10-13T00:01:14.252592894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765759bb74-868z9,Uid:06a79750-ad5b-4923-88da-893494a6c052,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927\"" Oct 13 00:01:14.274976 systemd[1]: Started cri-containerd-6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680.scope - libcontainer container 6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680. Oct 13 00:01:14.284553 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:01:14.307206 containerd[1533]: time="2025-10-13T00:01:14.306681506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8zhm2,Uid:9a5371c7-64f8-48a1-bd35-b322196d1b79,Namespace:calico-system,Attempt:0,} returns sandbox id \"6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680\"" Oct 13 00:01:14.671087 systemd-networkd[1435]: cali146f0a4ad48: Gained IPv6LL Oct 13 00:01:14.763180 kubelet[1864]: E1013 00:01:14.763133 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:14.827730 containerd[1533]: time="2025-10-13T00:01:14.827695033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f478b884-ghpv9,Uid:8a807673-1d24-4905-967e-6ee4665f380d,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:14.904408 kubelet[1864]: I1013 00:01:14.904338 1864 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-bb8f74bfb-cz8t5" podStartSLOduration=1.868245719 podStartE2EDuration="6.904321936s" podCreationTimestamp="2025-10-13 00:01:08 +0000 UTC" firstStartedPulling="2025-10-13 00:01:09.019589457 +0000 UTC m=+18.323180401" lastFinishedPulling="2025-10-13 00:01:14.055665674 +0000 UTC m=+23.359256618" observedRunningTime="2025-10-13 00:01:14.904180095 +0000 UTC m=+24.207771039" watchObservedRunningTime="2025-10-13 00:01:14.904321936 +0000 UTC m=+24.207912840" Oct 13 00:01:14.935761 systemd-networkd[1435]: cali372fe045c40: Link UP Oct 13 00:01:14.936022 systemd-networkd[1435]: cali372fe045c40: Gained carrier Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.862 [INFO][3468] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-eth0 calico-kube-controllers-86f478b884- calico-system 8a807673-1d24-4905-967e-6ee4665f380d 963 0 2025-10-13 00:00:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86f478b884 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 10.0.0.51 calico-kube-controllers-86f478b884-ghpv9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali372fe045c40 [] [] }} ContainerID="d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" Namespace="calico-system" Pod="calico-kube-controllers-86f478b884-ghpv9" WorkloadEndpoint="10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-" Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.863 [INFO][3468] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" Namespace="calico-system" 
Pod="calico-kube-controllers-86f478b884-ghpv9" WorkloadEndpoint="10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-eth0" Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.888 [INFO][3482] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" HandleID="k8s-pod-network.d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" Workload="10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-eth0" Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.888 [INFO][3482] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" HandleID="k8s-pod-network.d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" Workload="10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.51", "pod":"calico-kube-controllers-86f478b884-ghpv9", "timestamp":"2025-10-13 00:01:14.888150213 +0000 UTC"}, Hostname:"10.0.0.51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.888 [INFO][3482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.888 [INFO][3482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.888 [INFO][3482] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.51' Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.898 [INFO][3482] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" host="10.0.0.51" Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.906 [INFO][3482] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.51" Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.912 [INFO][3482] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.915 [INFO][3482] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.917 [INFO][3482] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.917 [INFO][3482] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" host="10.0.0.51" Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.919 [INFO][3482] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542 Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.924 [INFO][3482] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" host="10.0.0.51" Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.931 [INFO][3482] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.198/26] block=192.168.109.192/26 
handle="k8s-pod-network.d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" host="10.0.0.51" Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.931 [INFO][3482] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.198/26] handle="k8s-pod-network.d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" host="10.0.0.51" Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.931 [INFO][3482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:01:14.951168 containerd[1533]: 2025-10-13 00:01:14.931 [INFO][3482] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.198/26] IPv6=[] ContainerID="d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" HandleID="k8s-pod-network.d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" Workload="10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-eth0" Oct 13 00:01:14.951775 containerd[1533]: 2025-10-13 00:01:14.933 [INFO][3468] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" Namespace="calico-system" Pod="calico-kube-controllers-86f478b884-ghpv9" WorkloadEndpoint="10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-eth0", GenerateName:"calico-kube-controllers-86f478b884-", Namespace:"calico-system", SelfLink:"", UID:"8a807673-1d24-4905-967e-6ee4665f380d", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f478b884", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"", Pod:"calico-kube-controllers-86f478b884-ghpv9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali372fe045c40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:14.951775 containerd[1533]: 2025-10-13 00:01:14.933 [INFO][3468] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.198/32] ContainerID="d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" Namespace="calico-system" Pod="calico-kube-controllers-86f478b884-ghpv9" WorkloadEndpoint="10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-eth0" Oct 13 00:01:14.951775 containerd[1533]: 2025-10-13 00:01:14.933 [INFO][3468] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali372fe045c40 ContainerID="d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" Namespace="calico-system" Pod="calico-kube-controllers-86f478b884-ghpv9" WorkloadEndpoint="10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-eth0" Oct 13 00:01:14.951775 containerd[1533]: 2025-10-13 00:01:14.936 [INFO][3468] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" Namespace="calico-system" Pod="calico-kube-controllers-86f478b884-ghpv9" WorkloadEndpoint="10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-eth0" Oct 13 00:01:14.951775 containerd[1533]: 
2025-10-13 00:01:14.939 [INFO][3468] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" Namespace="calico-system" Pod="calico-kube-controllers-86f478b884-ghpv9" WorkloadEndpoint="10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-eth0", GenerateName:"calico-kube-controllers-86f478b884-", Namespace:"calico-system", SelfLink:"", UID:"8a807673-1d24-4905-967e-6ee4665f380d", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f478b884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542", Pod:"calico-kube-controllers-86f478b884-ghpv9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali372fe045c40", MAC:"a2:7c:33:40:4c:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:14.951775 containerd[1533]: 
2025-10-13 00:01:14.948 [INFO][3468] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" Namespace="calico-system" Pod="calico-kube-controllers-86f478b884-ghpv9" WorkloadEndpoint="10.0.0.51-k8s-calico--kube--controllers--86f478b884--ghpv9-eth0" Oct 13 00:01:14.978843 containerd[1533]: time="2025-10-13T00:01:14.978503861Z" level=info msg="connecting to shim d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542" address="unix:///run/containerd/s/ec9326948b78a2b7bdf0ac07190f341f13de15f38c682efcb8f2776023bae35f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:15.001020 systemd[1]: Started cri-containerd-d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542.scope - libcontainer container d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542. Oct 13 00:01:15.012277 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:01:15.040961 containerd[1533]: time="2025-10-13T00:01:15.040914877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f478b884-ghpv9,Uid:8a807673-1d24-4905-967e-6ee4665f380d,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542\"" Oct 13 00:01:15.311115 systemd-networkd[1435]: calie68424a3e48: Gained IPv6LL Oct 13 00:01:15.764277 kubelet[1864]: E1013 00:01:15.764235 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:15.830614 containerd[1533]: time="2025-10-13T00:01:15.830340433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pzdj6,Uid:5066ee12-3d1a-4745-bdf2-f00011135a06,Namespace:kube-system,Attempt:0,}" Oct 13 00:01:15.976625 systemd-networkd[1435]: cali1abbdc5ede8: Link UP Oct 13 00:01:15.977539 systemd-networkd[1435]: cali1abbdc5ede8: 
Gained carrier Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.887 [INFO][3554] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-eth0 coredns-66bc5c9577- kube-system 5066ee12-3d1a-4745-bdf2-f00011135a06 964 0 2025-10-13 00:00:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.51 coredns-66bc5c9577-pzdj6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1abbdc5ede8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" Namespace="kube-system" Pod="coredns-66bc5c9577-pzdj6" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-" Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.887 [INFO][3554] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" Namespace="kube-system" Pod="coredns-66bc5c9577-pzdj6" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-eth0" Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.917 [INFO][3564] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" HandleID="k8s-pod-network.d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" Workload="10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-eth0" Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.917 [INFO][3564] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" HandleID="k8s-pod-network.d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" 
Workload="10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d680), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.51", "pod":"coredns-66bc5c9577-pzdj6", "timestamp":"2025-10-13 00:01:15.917490575 +0000 UTC"}, Hostname:"10.0.0.51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.917 [INFO][3564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.917 [INFO][3564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.917 [INFO][3564] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.51' Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.930 [INFO][3564] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" host="10.0.0.51" Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.941 [INFO][3564] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.51" Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.948 [INFO][3564] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.950 [INFO][3564] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.953 [INFO][3564] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.953 [INFO][3564] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.109.192/26 handle="k8s-pod-network.d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" host="10.0.0.51" Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.957 [INFO][3564] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72 Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.962 [INFO][3564] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" host="10.0.0.51" Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.969 [INFO][3564] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.199/26] block=192.168.109.192/26 handle="k8s-pod-network.d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" host="10.0.0.51" Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.969 [INFO][3564] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.199/26] handle="k8s-pod-network.d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" host="10.0.0.51" Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.969 [INFO][3564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:01:15.995207 containerd[1533]: 2025-10-13 00:01:15.969 [INFO][3564] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.199/26] IPv6=[] ContainerID="d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" HandleID="k8s-pod-network.d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" Workload="10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-eth0" Oct 13 00:01:15.995942 containerd[1533]: 2025-10-13 00:01:15.972 [INFO][3554] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" Namespace="kube-system" Pod="coredns-66bc5c9577-pzdj6" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5066ee12-3d1a-4745-bdf2-f00011135a06", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"", Pod:"coredns-66bc5c9577-pzdj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1abbdc5ede8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:15.995942 containerd[1533]: 2025-10-13 00:01:15.972 [INFO][3554] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.199/32] ContainerID="d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" Namespace="kube-system" Pod="coredns-66bc5c9577-pzdj6" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-eth0" Oct 13 00:01:15.995942 containerd[1533]: 2025-10-13 00:01:15.972 [INFO][3554] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1abbdc5ede8 ContainerID="d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" Namespace="kube-system" Pod="coredns-66bc5c9577-pzdj6" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-eth0" Oct 13 00:01:15.995942 containerd[1533]: 2025-10-13 00:01:15.977 [INFO][3554] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" Namespace="kube-system" Pod="coredns-66bc5c9577-pzdj6" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-eth0" Oct 13 00:01:15.995942 containerd[1533]: 2025-10-13 00:01:15.977 [INFO][3554] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" Namespace="kube-system" Pod="coredns-66bc5c9577-pzdj6" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5066ee12-3d1a-4745-bdf2-f00011135a06", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72", Pod:"coredns-66bc5c9577-pzdj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1abbdc5ede8", MAC:"ce:6f:b7:b3:4c:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:15.995942 containerd[1533]: 2025-10-13 00:01:15.992 [INFO][3554] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" Namespace="kube-system" Pod="coredns-66bc5c9577-pzdj6" WorkloadEndpoint="10.0.0.51-k8s-coredns--66bc5c9577--pzdj6-eth0" Oct 13 00:01:16.032453 containerd[1533]: time="2025-10-13T00:01:16.032323900Z" level=info msg="connecting to shim d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72" address="unix:///run/containerd/s/ede98a3179029a2cc880aab02322edacda11b830e8a6aabe68dd5239ec6f67f3" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:16.063995 systemd[1]: Started cri-containerd-d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72.scope - libcontainer container d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72. 
Oct 13 00:01:16.081766 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:01:16.102763 containerd[1533]: time="2025-10-13T00:01:16.102719451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pzdj6,Uid:5066ee12-3d1a-4745-bdf2-f00011135a06,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72\"" Oct 13 00:01:16.270912 systemd-networkd[1435]: cali293d87da418: Gained IPv6LL Oct 13 00:01:16.548698 containerd[1533]: time="2025-10-13T00:01:16.548071192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:16.548698 containerd[1533]: time="2025-10-13T00:01:16.548662716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Oct 13 00:01:16.549364 containerd[1533]: time="2025-10-13T00:01:16.549341080Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:16.551660 containerd[1533]: time="2025-10-13T00:01:16.551462775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:16.552322 containerd[1533]: time="2025-10-13T00:01:16.552275020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.496419264s" Oct 13 00:01:16.552322 
containerd[1533]: time="2025-10-13T00:01:16.552310540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 00:01:16.553298 containerd[1533]: time="2025-10-13T00:01:16.553217386Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 13 00:01:16.556448 containerd[1533]: time="2025-10-13T00:01:16.555974165Z" level=info msg="CreateContainer within sandbox \"f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:01:16.561999 containerd[1533]: time="2025-10-13T00:01:16.561965765Z" level=info msg="Container 55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:16.570531 containerd[1533]: time="2025-10-13T00:01:16.570465782Z" level=info msg="CreateContainer within sandbox \"f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\"" Oct 13 00:01:16.571058 containerd[1533]: time="2025-10-13T00:01:16.571035506Z" level=info msg="StartContainer for \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\"" Oct 13 00:01:16.572280 containerd[1533]: time="2025-10-13T00:01:16.572120793Z" level=info msg="connecting to shim 55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51" address="unix:///run/containerd/s/155044cf0be201445a89795e10e8c6c2f6eda1f87476207eefe9bed5060fa22f" protocol=ttrpc version=3 Oct 13 00:01:16.589079 systemd[1]: Started cri-containerd-55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51.scope - libcontainer container 55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51. 
Oct 13 00:01:16.628435 containerd[1533]: time="2025-10-13T00:01:16.628400129Z" level=info msg="StartContainer for \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\" returns successfully" Oct 13 00:01:16.654962 systemd-networkd[1435]: cali372fe045c40: Gained IPv6LL Oct 13 00:01:16.764624 kubelet[1864]: E1013 00:01:16.764575 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:16.832630 containerd[1533]: time="2025-10-13T00:01:16.832495495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-9nphb,Uid:2c33bcce-e6aa-438f-81c0-a1243e0458a8,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:16.834009 containerd[1533]: time="2025-10-13T00:01:16.833905225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b686ccc97-q9cxb,Uid:abe04e5d-16c5-445a-94e4-1f8f8f789715,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:01:16.916947 kubelet[1864]: I1013 00:01:16.916721 1864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b686ccc97-gg2z6" podStartSLOduration=46.51998753 podStartE2EDuration="49.916706339s" podCreationTimestamp="2025-10-13 00:00:27 +0000 UTC" firstStartedPulling="2025-10-13 00:01:13.156395217 +0000 UTC m=+22.459986161" lastFinishedPulling="2025-10-13 00:01:16.553114026 +0000 UTC m=+25.856704970" observedRunningTime="2025-10-13 00:01:16.916505618 +0000 UTC m=+26.220096562" watchObservedRunningTime="2025-10-13 00:01:16.916706339 +0000 UTC m=+26.220297283" Oct 13 00:01:16.979219 systemd-networkd[1435]: cali9ac0b1f2a6f: Link UP Oct 13 00:01:16.979930 systemd-networkd[1435]: cali9ac0b1f2a6f: Gained carrier Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.895 [INFO][3668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0 
calico-apiserver-5b686ccc97- calico-apiserver abe04e5d-16c5-445a-94e4-1f8f8f789715 966 0 2025-10-13 00:00:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b686ccc97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.51 calico-apiserver-5b686ccc97-q9cxb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ac0b1f2a6f [] [] }} ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-q9cxb" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-" Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.895 [INFO][3668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-q9cxb" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0" Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.929 [INFO][3705] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" HandleID="k8s-pod-network.52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Workload="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0" Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.929 [INFO][3705] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" HandleID="k8s-pod-network.52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Workload="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d59b0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.51", "pod":"calico-apiserver-5b686ccc97-q9cxb", "timestamp":"2025-10-13 00:01:16.929084942 +0000 UTC"}, Hostname:"10.0.0.51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.929 [INFO][3705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.929 [INFO][3705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.929 [INFO][3705] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.51' Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.942 [INFO][3705] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" host="10.0.0.51" Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.949 [INFO][3705] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.51" Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.954 [INFO][3705] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.956 [INFO][3705] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.958 [INFO][3705] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.958 [INFO][3705] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" host="10.0.0.51" Oct 13 
00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.960 [INFO][3705] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4 Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.964 [INFO][3705] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" host="10.0.0.51" Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.971 [INFO][3705] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.200/26] block=192.168.109.192/26 handle="k8s-pod-network.52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" host="10.0.0.51" Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.971 [INFO][3705] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.200/26] handle="k8s-pod-network.52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" host="10.0.0.51" Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.971 [INFO][3705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:01:16.993455 containerd[1533]: 2025-10-13 00:01:16.971 [INFO][3705] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.200/26] IPv6=[] ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" HandleID="k8s-pod-network.52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Workload="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0" Oct 13 00:01:16.994260 containerd[1533]: 2025-10-13 00:01:16.973 [INFO][3668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-q9cxb" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0", GenerateName:"calico-apiserver-5b686ccc97-", Namespace:"calico-apiserver", SelfLink:"", UID:"abe04e5d-16c5-445a-94e4-1f8f8f789715", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b686ccc97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"", Pod:"calico-apiserver-5b686ccc97-q9cxb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ac0b1f2a6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:16.994260 containerd[1533]: 2025-10-13 00:01:16.973 [INFO][3668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.200/32] ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-q9cxb" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0" Oct 13 00:01:16.994260 containerd[1533]: 2025-10-13 00:01:16.973 [INFO][3668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ac0b1f2a6f ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-q9cxb" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0" Oct 13 00:01:16.994260 containerd[1533]: 2025-10-13 00:01:16.980 [INFO][3668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-q9cxb" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0" Oct 13 00:01:16.994260 containerd[1533]: 2025-10-13 00:01:16.981 [INFO][3668] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-q9cxb" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0", 
GenerateName:"calico-apiserver-5b686ccc97-", Namespace:"calico-apiserver", SelfLink:"", UID:"abe04e5d-16c5-445a-94e4-1f8f8f789715", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b686ccc97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4", Pod:"calico-apiserver-5b686ccc97-q9cxb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ac0b1f2a6f", MAC:"6e:f1:55:3b:89:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:16.994260 containerd[1533]: 2025-10-13 00:01:16.990 [INFO][3668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Namespace="calico-apiserver" Pod="calico-apiserver-5b686ccc97-q9cxb" WorkloadEndpoint="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0" Oct 13 00:01:17.038408 containerd[1533]: time="2025-10-13T00:01:17.038298097Z" level=info msg="connecting to shim 52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" 
address="unix:///run/containerd/s/f165bdb1ce0ce1637c99bcab898845cd74e6fd1dd2a2cec04508bb263b6ef3ab" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:17.065004 systemd[1]: Started cri-containerd-52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4.scope - libcontainer container 52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4. Oct 13 00:01:17.076199 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:01:17.088147 systemd-networkd[1435]: caliceff322266c: Link UP Oct 13 00:01:17.089683 systemd-networkd[1435]: caliceff322266c: Gained carrier Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:16.888 [INFO][3667] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.51-k8s-goldmane--854f97d977--9nphb-eth0 goldmane-854f97d977- calico-system 2c33bcce-e6aa-438f-81c0-a1243e0458a8 967 0 2025-10-13 00:00:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:854f97d977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 10.0.0.51 goldmane-854f97d977-9nphb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliceff322266c [] [] }} ContainerID="76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" Namespace="calico-system" Pod="goldmane-854f97d977-9nphb" WorkloadEndpoint="10.0.0.51-k8s-goldmane--854f97d977--9nphb-" Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:16.888 [INFO][3667] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" Namespace="calico-system" Pod="goldmane-854f97d977-9nphb" WorkloadEndpoint="10.0.0.51-k8s-goldmane--854f97d977--9nphb-eth0" Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:16.933 [INFO][3699] ipam/ipam_plugin.go 225: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" HandleID="k8s-pod-network.76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" Workload="10.0.0.51-k8s-goldmane--854f97d977--9nphb-eth0" Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:16.933 [INFO][3699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" HandleID="k8s-pod-network.76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" Workload="10.0.0.51-k8s-goldmane--854f97d977--9nphb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042dbe0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.51", "pod":"goldmane-854f97d977-9nphb", "timestamp":"2025-10-13 00:01:16.933732573 +0000 UTC"}, Hostname:"10.0.0.51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:16.933 [INFO][3699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:16.971 [INFO][3699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:16.971 [INFO][3699] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.51' Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:17.042 [INFO][3699] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" host="10.0.0.51" Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:17.049 [INFO][3699] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.51" Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:17.056 [INFO][3699] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:17.060 [INFO][3699] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:17.064 [INFO][3699] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:17.066 [INFO][3699] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" host="10.0.0.51" Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:17.068 [INFO][3699] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:17.071 [INFO][3699] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" host="10.0.0.51" Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:17.079 [INFO][3699] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.201/26] block=192.168.109.192/26 
handle="k8s-pod-network.76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" host="10.0.0.51" Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:17.079 [INFO][3699] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.201/26] handle="k8s-pod-network.76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" host="10.0.0.51" Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:17.080 [INFO][3699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:01:17.109579 containerd[1533]: 2025-10-13 00:01:17.081 [INFO][3699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.201/26] IPv6=[] ContainerID="76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" HandleID="k8s-pod-network.76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" Workload="10.0.0.51-k8s-goldmane--854f97d977--9nphb-eth0" Oct 13 00:01:17.111066 containerd[1533]: 2025-10-13 00:01:17.084 [INFO][3667] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" Namespace="calico-system" Pod="goldmane-854f97d977-9nphb" WorkloadEndpoint="10.0.0.51-k8s-goldmane--854f97d977--9nphb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-goldmane--854f97d977--9nphb-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"2c33bcce-e6aa-438f-81c0-a1243e0458a8", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"", Pod:"goldmane-854f97d977-9nphb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliceff322266c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:17.111066 containerd[1533]: 2025-10-13 00:01:17.084 [INFO][3667] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.201/32] ContainerID="76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" Namespace="calico-system" Pod="goldmane-854f97d977-9nphb" WorkloadEndpoint="10.0.0.51-k8s-goldmane--854f97d977--9nphb-eth0" Oct 13 00:01:17.111066 containerd[1533]: 2025-10-13 00:01:17.084 [INFO][3667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliceff322266c ContainerID="76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" Namespace="calico-system" Pod="goldmane-854f97d977-9nphb" WorkloadEndpoint="10.0.0.51-k8s-goldmane--854f97d977--9nphb-eth0" Oct 13 00:01:17.111066 containerd[1533]: 2025-10-13 00:01:17.088 [INFO][3667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" Namespace="calico-system" Pod="goldmane-854f97d977-9nphb" WorkloadEndpoint="10.0.0.51-k8s-goldmane--854f97d977--9nphb-eth0" Oct 13 00:01:17.111066 containerd[1533]: 2025-10-13 00:01:17.088 [INFO][3667] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" Namespace="calico-system" Pod="goldmane-854f97d977-9nphb" 
WorkloadEndpoint="10.0.0.51-k8s-goldmane--854f97d977--9nphb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-goldmane--854f97d977--9nphb-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"2c33bcce-e6aa-438f-81c0-a1243e0458a8", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d", Pod:"goldmane-854f97d977-9nphb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliceff322266c", MAC:"4e:c0:c3:9d:1c:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:17.111066 containerd[1533]: 2025-10-13 00:01:17.105 [INFO][3667] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" Namespace="calico-system" Pod="goldmane-854f97d977-9nphb" WorkloadEndpoint="10.0.0.51-k8s-goldmane--854f97d977--9nphb-eth0" Oct 13 00:01:17.111452 containerd[1533]: time="2025-10-13T00:01:17.111412876Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5b686ccc97-q9cxb,Uid:abe04e5d-16c5-445a-94e4-1f8f8f789715,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4\"" Oct 13 00:01:17.120069 containerd[1533]: time="2025-10-13T00:01:17.119681768Z" level=info msg="CreateContainer within sandbox \"52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:01:17.145308 containerd[1533]: time="2025-10-13T00:01:17.145265688Z" level=info msg="Container 3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:17.151027 containerd[1533]: time="2025-10-13T00:01:17.150975804Z" level=info msg="connecting to shim 76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d" address="unix:///run/containerd/s/60d7544a03dfac1e5ebba490367fd1e9eac5d99d5a207fda3d9b71b15cc9d058" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:17.155755 containerd[1533]: time="2025-10-13T00:01:17.154552026Z" level=info msg="CreateContainer within sandbox \"52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\"" Oct 13 00:01:17.155755 containerd[1533]: time="2025-10-13T00:01:17.155648113Z" level=info msg="StartContainer for \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\"" Oct 13 00:01:17.156675 containerd[1533]: time="2025-10-13T00:01:17.156623919Z" level=info msg="connecting to shim 3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0" address="unix:///run/containerd/s/f165bdb1ce0ce1637c99bcab898845cd74e6fd1dd2a2cec04508bb263b6ef3ab" protocol=ttrpc version=3 Oct 13 00:01:17.179970 systemd[1]: Started cri-containerd-3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0.scope - 
libcontainer container 3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0. Oct 13 00:01:17.181356 systemd[1]: Started cri-containerd-76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d.scope - libcontainer container 76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d. Oct 13 00:01:17.197552 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:01:17.249056 containerd[1533]: time="2025-10-13T00:01:17.249000459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-9nphb,Uid:2c33bcce-e6aa-438f-81c0-a1243e0458a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d\"" Oct 13 00:01:17.249707 containerd[1533]: time="2025-10-13T00:01:17.249473502Z" level=info msg="StartContainer for \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\" returns successfully" Oct 13 00:01:17.765177 kubelet[1864]: E1013 00:01:17.765142 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:17.932150 kubelet[1864]: I1013 00:01:17.932111 1864 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:01:17.999916 systemd-networkd[1435]: cali1abbdc5ede8: Gained IPv6LL Oct 13 00:01:18.087480 containerd[1533]: time="2025-10-13T00:01:18.087360725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:18.090818 containerd[1533]: time="2025-10-13T00:01:18.087427485Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Oct 13 00:01:18.091337 containerd[1533]: time="2025-10-13T00:01:18.091307628Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:18.095727 containerd[1533]: time="2025-10-13T00:01:18.095680374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:18.097780 containerd[1533]: time="2025-10-13T00:01:18.097738186Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.54449188s" Oct 13 00:01:18.097780 containerd[1533]: time="2025-10-13T00:01:18.097772586Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Oct 13 00:01:18.099589 containerd[1533]: time="2025-10-13T00:01:18.099561717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 00:01:18.103190 containerd[1533]: time="2025-10-13T00:01:18.102972017Z" level=info msg="CreateContainer within sandbox \"5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:01:18.117916 containerd[1533]: time="2025-10-13T00:01:18.117863904Z" level=info msg="Container edca6b133caa419222229a46fdd1483f658e105fc5994e7af8df8b3ef21736c1: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:18.121164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1745770910.mount: Deactivated successfully. 
Oct 13 00:01:18.127758 containerd[1533]: time="2025-10-13T00:01:18.127717002Z" level=info msg="CreateContainer within sandbox \"5410ae6b2b7166769ef8df2b9f4ecad5842928347e9fff0215ccb678f58a9f1f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"edca6b133caa419222229a46fdd1483f658e105fc5994e7af8df8b3ef21736c1\"" Oct 13 00:01:18.128834 containerd[1533]: time="2025-10-13T00:01:18.128721888Z" level=info msg="StartContainer for \"edca6b133caa419222229a46fdd1483f658e105fc5994e7af8df8b3ef21736c1\"" Oct 13 00:01:18.129861 containerd[1533]: time="2025-10-13T00:01:18.129774534Z" level=info msg="connecting to shim edca6b133caa419222229a46fdd1483f658e105fc5994e7af8df8b3ef21736c1" address="unix:///run/containerd/s/437d3b8ce7f0417ac9a56b3d1cc56bae90b13365bb6f6ef68a465463122b3a3e" protocol=ttrpc version=3 Oct 13 00:01:18.153985 systemd[1]: Started cri-containerd-edca6b133caa419222229a46fdd1483f658e105fc5994e7af8df8b3ef21736c1.scope - libcontainer container edca6b133caa419222229a46fdd1483f658e105fc5994e7af8df8b3ef21736c1. 
Oct 13 00:01:18.185091 containerd[1533]: time="2025-10-13T00:01:18.185048140Z" level=info msg="StartContainer for \"edca6b133caa419222229a46fdd1483f658e105fc5994e7af8df8b3ef21736c1\" returns successfully" Oct 13 00:01:18.465871 containerd[1533]: time="2025-10-13T00:01:18.465822991Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:18.466552 containerd[1533]: time="2025-10-13T00:01:18.466511235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 00:01:18.468446 containerd[1533]: time="2025-10-13T00:01:18.468319726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 368.719809ms" Oct 13 00:01:18.468446 containerd[1533]: time="2025-10-13T00:01:18.468356526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 00:01:18.469312 containerd[1533]: time="2025-10-13T00:01:18.469290051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 00:01:18.472989 containerd[1533]: time="2025-10-13T00:01:18.472927073Z" level=info msg="CreateContainer within sandbox \"e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:01:18.489434 containerd[1533]: time="2025-10-13T00:01:18.489391210Z" level=info msg="Container d316fd4e610eae5fa4efe859b1394f1951d47ec0b7f0f303b57dc19b1ba6097d: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:18.495525 containerd[1533]: 
time="2025-10-13T00:01:18.495396365Z" level=info msg="CreateContainer within sandbox \"e0169602d022a2f086f8590e1600f1c445b0b975ab8817d44223e839114e1927\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d316fd4e610eae5fa4efe859b1394f1951d47ec0b7f0f303b57dc19b1ba6097d\"" Oct 13 00:01:18.495918 containerd[1533]: time="2025-10-13T00:01:18.495892728Z" level=info msg="StartContainer for \"d316fd4e610eae5fa4efe859b1394f1951d47ec0b7f0f303b57dc19b1ba6097d\"" Oct 13 00:01:18.497291 containerd[1533]: time="2025-10-13T00:01:18.497254776Z" level=info msg="connecting to shim d316fd4e610eae5fa4efe859b1394f1951d47ec0b7f0f303b57dc19b1ba6097d" address="unix:///run/containerd/s/6d8211eccbee641b0619ca2c5fb37185e08ca65fd8faf6603774034b19a9db08" protocol=ttrpc version=3 Oct 13 00:01:18.521972 systemd[1]: Started cri-containerd-d316fd4e610eae5fa4efe859b1394f1951d47ec0b7f0f303b57dc19b1ba6097d.scope - libcontainer container d316fd4e610eae5fa4efe859b1394f1951d47ec0b7f0f303b57dc19b1ba6097d. 
Oct 13 00:01:18.562114 containerd[1533]: time="2025-10-13T00:01:18.562074837Z" level=info msg="StartContainer for \"d316fd4e610eae5fa4efe859b1394f1951d47ec0b7f0f303b57dc19b1ba6097d\" returns successfully" Oct 13 00:01:18.766336 kubelet[1864]: E1013 00:01:18.766248 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:18.767190 systemd-networkd[1435]: caliceff322266c: Gained IPv6LL Oct 13 00:01:18.830917 systemd-networkd[1435]: cali9ac0b1f2a6f: Gained IPv6LL Oct 13 00:01:18.954572 kubelet[1864]: I1013 00:01:18.953895 1864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b686ccc97-q9cxb" podStartSLOduration=51.953876062 podStartE2EDuration="51.953876062s" podCreationTimestamp="2025-10-13 00:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:01:17.943439976 +0000 UTC m=+27.247030920" watchObservedRunningTime="2025-10-13 00:01:18.953876062 +0000 UTC m=+28.257467006" Oct 13 00:01:18.966515 kubelet[1864]: I1013 00:01:18.966447 1864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-phvtw" podStartSLOduration=56.055849481 podStartE2EDuration="1m0.966429296s" podCreationTimestamp="2025-10-13 00:00:18 +0000 UTC" firstStartedPulling="2025-10-13 00:01:13.188186337 +0000 UTC m=+22.491777281" lastFinishedPulling="2025-10-13 00:01:18.098766152 +0000 UTC m=+27.402357096" observedRunningTime="2025-10-13 00:01:18.954346744 +0000 UTC m=+28.257937688" watchObservedRunningTime="2025-10-13 00:01:18.966429296 +0000 UTC m=+28.270020240" Oct 13 00:01:18.967351 kubelet[1864]: I1013 00:01:18.967260 1864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-765759bb74-868z9" podStartSLOduration=46.75165899 podStartE2EDuration="50.96725094s" 
podCreationTimestamp="2025-10-13 00:00:28 +0000 UTC" firstStartedPulling="2025-10-13 00:01:14.253594821 +0000 UTC m=+23.557185765" lastFinishedPulling="2025-10-13 00:01:18.469186771 +0000 UTC m=+27.772777715" observedRunningTime="2025-10-13 00:01:18.96716622 +0000 UTC m=+28.270757204" watchObservedRunningTime="2025-10-13 00:01:18.96725094 +0000 UTC m=+28.270841884" Oct 13 00:01:19.767094 kubelet[1864]: E1013 00:01:19.767027 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:19.947111 kubelet[1864]: I1013 00:01:19.946850 1864 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:01:20.280213 systemd[1]: Created slice kubepods-besteffort-podb0881af2_f189_4951_9781_74f543a56e76.slice - libcontainer container kubepods-besteffort-podb0881af2_f189_4951_9781_74f543a56e76.slice. Oct 13 00:01:20.365584 kubelet[1864]: I1013 00:01:20.365525 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b0881af2-f189-4951-9781-74f543a56e76-data\") pod \"nfs-server-provisioner-0\" (UID: \"b0881af2-f189-4951-9781-74f543a56e76\") " pod="default/nfs-server-provisioner-0" Oct 13 00:01:20.365584 kubelet[1864]: I1013 00:01:20.365575 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8k25\" (UniqueName: \"kubernetes.io/projected/b0881af2-f189-4951-9781-74f543a56e76-kube-api-access-r8k25\") pod \"nfs-server-provisioner-0\" (UID: \"b0881af2-f189-4951-9781-74f543a56e76\") " pod="default/nfs-server-provisioner-0" Oct 13 00:01:20.585331 containerd[1533]: time="2025-10-13T00:01:20.585230153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b0881af2-f189-4951-9781-74f543a56e76,Namespace:default,Attempt:0,}" Oct 13 00:01:20.722697 systemd-networkd[1435]: cali60e51b789ff: Link UP Oct 13 
00:01:20.722914 systemd-networkd[1435]: cali60e51b789ff: Gained carrier Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.628 [INFO][3998] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.51-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default b0881af2-f189-4951-9781-74f543a56e76 1204 0 2025-10-13 00:01:20 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-7c9b4c458c heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.51 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.51-k8s-nfs--server--provisioner--0-" Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.628 [INFO][3998] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.51-k8s-nfs--server--provisioner--0-eth0" Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.658 [INFO][4014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" 
HandleID="k8s-pod-network.b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" Workload="10.0.0.51-k8s-nfs--server--provisioner--0-eth0" Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.658 [INFO][4014] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" HandleID="k8s-pod-network.b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" Workload="10.0.0.51-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5f0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.51", "pod":"nfs-server-provisioner-0", "timestamp":"2025-10-13 00:01:20.658410732 +0000 UTC"}, Hostname:"10.0.0.51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.658 [INFO][4014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.658 [INFO][4014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.658 [INFO][4014] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.51' Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.671 [INFO][4014] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" host="10.0.0.51" Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.675 [INFO][4014] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.51" Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.684 [INFO][4014] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.690 [INFO][4014] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.698 [INFO][4014] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.698 [INFO][4014] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" host="10.0.0.51" Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.701 [INFO][4014] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64 Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.704 [INFO][4014] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" host="10.0.0.51" Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.717 [INFO][4014] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.202/26] block=192.168.109.192/26 
handle="k8s-pod-network.b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" host="10.0.0.51" Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.717 [INFO][4014] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.202/26] handle="k8s-pod-network.b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" host="10.0.0.51" Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.717 [INFO][4014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:01:20.734175 containerd[1533]: 2025-10-13 00:01:20.717 [INFO][4014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.202/26] IPv6=[] ContainerID="b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" HandleID="k8s-pod-network.b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" Workload="10.0.0.51-k8s-nfs--server--provisioner--0-eth0" Oct 13 00:01:20.734712 containerd[1533]: 2025-10-13 00:01:20.719 [INFO][3998] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.51-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b0881af2-f189-4951-9781-74f543a56e76", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 1, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.109.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:20.734712 containerd[1533]: 2025-10-13 00:01:20.719 [INFO][3998] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.202/32] ContainerID="b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.51-k8s-nfs--server--provisioner--0-eth0" Oct 13 00:01:20.734712 containerd[1533]: 2025-10-13 00:01:20.719 [INFO][3998] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.51-k8s-nfs--server--provisioner--0-eth0" Oct 13 00:01:20.734712 containerd[1533]: 2025-10-13 00:01:20.721 [INFO][3998] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.51-k8s-nfs--server--provisioner--0-eth0" Oct 13 00:01:20.734999 containerd[1533]: 2025-10-13 00:01:20.725 [INFO][3998] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.51-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b0881af2-f189-4951-9781-74f543a56e76", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 1, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.109.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"6a:1c:87:77:b6:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:20.734999 containerd[1533]: 2025-10-13 00:01:20.732 [INFO][3998] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.51-k8s-nfs--server--provisioner--0-eth0" Oct 13 00:01:20.768218 kubelet[1864]: E1013 00:01:20.768151 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:20.841504 containerd[1533]: 
time="2025-10-13T00:01:20.840208071Z" level=info msg="connecting to shim b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64" address="unix:///run/containerd/s/ad5457de8f732fa2baec098fca8edca969570dad52e07ee43a96b01e65c7b886" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:20.869968 systemd[1]: Started cri-containerd-b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64.scope - libcontainer container b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64. Oct 13 00:01:20.882310 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:01:20.903207 containerd[1533]: time="2025-10-13T00:01:20.903150957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b0881af2-f189-4951-9781-74f543a56e76,Namespace:default,Attempt:0,} returns sandbox id \"b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64\"" Oct 13 00:01:21.126444 containerd[1533]: time="2025-10-13T00:01:21.126317990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:21.127160 containerd[1533]: time="2025-10-13T00:01:21.127130874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Oct 13 00:01:21.127887 containerd[1533]: time="2025-10-13T00:01:21.127855477Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:21.130034 containerd[1533]: time="2025-10-13T00:01:21.129996208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:21.130502 containerd[1533]: time="2025-10-13T00:01:21.130477370Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 2.660939797s" Oct 13 00:01:21.130557 containerd[1533]: time="2025-10-13T00:01:21.130503970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Oct 13 00:01:21.132061 containerd[1533]: time="2025-10-13T00:01:21.131950297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 00:01:21.134674 containerd[1533]: time="2025-10-13T00:01:21.134635870Z" level=info msg="CreateContainer within sandbox \"6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 00:01:21.143832 containerd[1533]: time="2025-10-13T00:01:21.143429833Z" level=info msg="Container 522fb91d6575eaa52f8848a76f61b95a7d1d68eb1a7ca0dc4345e314f979a321: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:21.152110 containerd[1533]: time="2025-10-13T00:01:21.152049755Z" level=info msg="CreateContainer within sandbox \"6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"522fb91d6575eaa52f8848a76f61b95a7d1d68eb1a7ca0dc4345e314f979a321\"" Oct 13 00:01:21.152576 containerd[1533]: time="2025-10-13T00:01:21.152548437Z" level=info msg="StartContainer for \"522fb91d6575eaa52f8848a76f61b95a7d1d68eb1a7ca0dc4345e314f979a321\"" Oct 13 00:01:21.154201 containerd[1533]: time="2025-10-13T00:01:21.154175365Z" level=info msg="connecting to shim 522fb91d6575eaa52f8848a76f61b95a7d1d68eb1a7ca0dc4345e314f979a321" 
address="unix:///run/containerd/s/70ce6089643c02ef5b379a361df7fd2e064b68dba2584b7df5ba3a68f096a2ab" protocol=ttrpc version=3 Oct 13 00:01:21.180013 systemd[1]: Started cri-containerd-522fb91d6575eaa52f8848a76f61b95a7d1d68eb1a7ca0dc4345e314f979a321.scope - libcontainer container 522fb91d6575eaa52f8848a76f61b95a7d1d68eb1a7ca0dc4345e314f979a321. Oct 13 00:01:21.214104 containerd[1533]: time="2025-10-13T00:01:21.213979855Z" level=info msg="StartContainer for \"522fb91d6575eaa52f8848a76f61b95a7d1d68eb1a7ca0dc4345e314f979a321\" returns successfully" Oct 13 00:01:21.768594 kubelet[1864]: E1013 00:01:21.768538 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:21.903014 systemd-networkd[1435]: cali60e51b789ff: Gained IPv6LL Oct 13 00:01:22.769549 kubelet[1864]: E1013 00:01:22.769498 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:23.193603 containerd[1533]: time="2025-10-13T00:01:23.193547474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:23.194054 containerd[1533]: time="2025-10-13T00:01:23.194012116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Oct 13 00:01:23.194940 containerd[1533]: time="2025-10-13T00:01:23.194902759Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:23.196975 containerd[1533]: time="2025-10-13T00:01:23.196947008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:23.197832 
containerd[1533]: time="2025-10-13T00:01:23.197585291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.065423553s" Oct 13 00:01:23.197832 containerd[1533]: time="2025-10-13T00:01:23.197622211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Oct 13 00:01:23.198900 containerd[1533]: time="2025-10-13T00:01:23.198853696Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 13 00:01:23.207095 containerd[1533]: time="2025-10-13T00:01:23.207059931Z" level=info msg="CreateContainer within sandbox \"d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 00:01:23.216386 containerd[1533]: time="2025-10-13T00:01:23.216338131Z" level=info msg="Container 84bd89f67125fc56be17d23278a8acd7b59d3fdd7feaa22889343d4db92059cc: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:23.227225 containerd[1533]: time="2025-10-13T00:01:23.227172217Z" level=info msg="CreateContainer within sandbox \"d7074e03b60c798850a8211dde6841072fee8a9461bed1148f06b03368046542\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"84bd89f67125fc56be17d23278a8acd7b59d3fdd7feaa22889343d4db92059cc\"" Oct 13 00:01:23.227854 containerd[1533]: time="2025-10-13T00:01:23.227693899Z" level=info msg="StartContainer for \"84bd89f67125fc56be17d23278a8acd7b59d3fdd7feaa22889343d4db92059cc\"" Oct 13 00:01:23.228715 containerd[1533]: time="2025-10-13T00:01:23.228661863Z" level=info msg="connecting to shim 
84bd89f67125fc56be17d23278a8acd7b59d3fdd7feaa22889343d4db92059cc" address="unix:///run/containerd/s/ec9326948b78a2b7bdf0ac07190f341f13de15f38c682efcb8f2776023bae35f" protocol=ttrpc version=3 Oct 13 00:01:23.251982 systemd[1]: Started cri-containerd-84bd89f67125fc56be17d23278a8acd7b59d3fdd7feaa22889343d4db92059cc.scope - libcontainer container 84bd89f67125fc56be17d23278a8acd7b59d3fdd7feaa22889343d4db92059cc. Oct 13 00:01:23.285180 containerd[1533]: time="2025-10-13T00:01:23.285139744Z" level=info msg="StartContainer for \"84bd89f67125fc56be17d23278a8acd7b59d3fdd7feaa22889343d4db92059cc\" returns successfully" Oct 13 00:01:23.304893 containerd[1533]: time="2025-10-13T00:01:23.304846988Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:23.305666 containerd[1533]: time="2025-10-13T00:01:23.305615071Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=0" Oct 13 00:01:23.309638 containerd[1533]: time="2025-10-13T00:01:23.309579768Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 110.687992ms" Oct 13 00:01:23.309976 containerd[1533]: time="2025-10-13T00:01:23.309839209Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Oct 13 00:01:23.312710 containerd[1533]: time="2025-10-13T00:01:23.312670621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 00:01:23.315858 containerd[1533]: time="2025-10-13T00:01:23.315725234Z" level=info msg="CreateContainer within 
sandbox \"d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:01:23.323890 containerd[1533]: time="2025-10-13T00:01:23.323853389Z" level=info msg="Container 048388487a85080f43f82857b7a19c5731bc2042b9719a0da613b436a1cc8dce: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:23.330481 containerd[1533]: time="2025-10-13T00:01:23.330448617Z" level=info msg="CreateContainer within sandbox \"d8519961a9366467c5b8718be7f0082574d1ef771e68890cd12002e2e78b8f72\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"048388487a85080f43f82857b7a19c5731bc2042b9719a0da613b436a1cc8dce\"" Oct 13 00:01:23.331234 containerd[1533]: time="2025-10-13T00:01:23.331206340Z" level=info msg="StartContainer for \"048388487a85080f43f82857b7a19c5731bc2042b9719a0da613b436a1cc8dce\"" Oct 13 00:01:23.332101 containerd[1533]: time="2025-10-13T00:01:23.332060264Z" level=info msg="connecting to shim 048388487a85080f43f82857b7a19c5731bc2042b9719a0da613b436a1cc8dce" address="unix:///run/containerd/s/ede98a3179029a2cc880aab02322edacda11b830e8a6aabe68dd5239ec6f67f3" protocol=ttrpc version=3 Oct 13 00:01:23.354018 systemd[1]: Started cri-containerd-048388487a85080f43f82857b7a19c5731bc2042b9719a0da613b436a1cc8dce.scope - libcontainer container 048388487a85080f43f82857b7a19c5731bc2042b9719a0da613b436a1cc8dce. 
Oct 13 00:01:23.383175 containerd[1533]: time="2025-10-13T00:01:23.383133001Z" level=info msg="StartContainer for \"048388487a85080f43f82857b7a19c5731bc2042b9719a0da613b436a1cc8dce\" returns successfully" Oct 13 00:01:23.770077 kubelet[1864]: E1013 00:01:23.770015 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:23.979068 kubelet[1864]: I1013 00:01:23.978994 1864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86f478b884-ghpv9" podStartSLOduration=43.822972933 podStartE2EDuration="51.978978779s" podCreationTimestamp="2025-10-13 00:00:32 +0000 UTC" firstStartedPulling="2025-10-13 00:01:15.042579569 +0000 UTC m=+24.346170513" lastFinishedPulling="2025-10-13 00:01:23.198585455 +0000 UTC m=+32.502176359" observedRunningTime="2025-10-13 00:01:23.978296096 +0000 UTC m=+33.281887080" watchObservedRunningTime="2025-10-13 00:01:23.978978779 +0000 UTC m=+33.282569723" Oct 13 00:01:23.993971 kubelet[1864]: I1013 00:01:23.993910 1864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pzdj6" podStartSLOduration=58.786446728 podStartE2EDuration="1m5.993892403s" podCreationTimestamp="2025-10-13 00:00:18 +0000 UTC" firstStartedPulling="2025-10-13 00:01:16.104648504 +0000 UTC m=+25.408239448" lastFinishedPulling="2025-10-13 00:01:23.312094219 +0000 UTC m=+32.615685123" observedRunningTime="2025-10-13 00:01:23.992352236 +0000 UTC m=+33.295943180" watchObservedRunningTime="2025-10-13 00:01:23.993892403 +0000 UTC m=+33.297483347" Oct 13 00:01:24.015723 containerd[1533]: time="2025-10-13T00:01:24.015650732Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84bd89f67125fc56be17d23278a8acd7b59d3fdd7feaa22889343d4db92059cc\" id:\"48e2ad4eac353069c33728b8f8d754a3570f454e7dfc51f9dfc3c71f591ab006\" pid:4216 exited_at:{seconds:1760313684 nanos:10775152}" Oct 13 00:01:24.771110 
kubelet[1864]: E1013 00:01:24.771061 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:24.940559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1809353571.mount: Deactivated successfully. Oct 13 00:01:25.265847 containerd[1533]: time="2025-10-13T00:01:25.265771458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:25.266249 containerd[1533]: time="2025-10-13T00:01:25.266219779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Oct 13 00:01:25.267198 containerd[1533]: time="2025-10-13T00:01:25.267166983Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:25.269320 containerd[1533]: time="2025-10-13T00:01:25.269271791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:25.270554 containerd[1533]: time="2025-10-13T00:01:25.270444555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 1.957691254s" Oct 13 00:01:25.270554 containerd[1533]: time="2025-10-13T00:01:25.270478235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Oct 13 00:01:25.271827 containerd[1533]: 
time="2025-10-13T00:01:25.271739480Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Oct 13 00:01:25.273814 containerd[1533]: time="2025-10-13T00:01:25.273776048Z" level=info msg="CreateContainer within sandbox \"76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 00:01:25.281035 containerd[1533]: time="2025-10-13T00:01:25.280991835Z" level=info msg="Container 0ead8efc0411dc071f378202f57a93c7ea1f5f1e24e2824ba13a04275f22fc05: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:25.291576 containerd[1533]: time="2025-10-13T00:01:25.291516794Z" level=info msg="CreateContainer within sandbox \"76c572c6238ab9d41cc2ebaa523d24bb890fb351fd56c999efedc63957ef834d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0ead8efc0411dc071f378202f57a93c7ea1f5f1e24e2824ba13a04275f22fc05\"" Oct 13 00:01:25.292028 containerd[1533]: time="2025-10-13T00:01:25.292008036Z" level=info msg="StartContainer for \"0ead8efc0411dc071f378202f57a93c7ea1f5f1e24e2824ba13a04275f22fc05\"" Oct 13 00:01:25.293820 containerd[1533]: time="2025-10-13T00:01:25.293581402Z" level=info msg="connecting to shim 0ead8efc0411dc071f378202f57a93c7ea1f5f1e24e2824ba13a04275f22fc05" address="unix:///run/containerd/s/60d7544a03dfac1e5ebba490367fd1e9eac5d99d5a207fda3d9b71b15cc9d058" protocol=ttrpc version=3 Oct 13 00:01:25.325071 systemd[1]: Started cri-containerd-0ead8efc0411dc071f378202f57a93c7ea1f5f1e24e2824ba13a04275f22fc05.scope - libcontainer container 0ead8efc0411dc071f378202f57a93c7ea1f5f1e24e2824ba13a04275f22fc05. 
Oct 13 00:01:25.361166 containerd[1533]: time="2025-10-13T00:01:25.361058535Z" level=info msg="StartContainer for \"0ead8efc0411dc071f378202f57a93c7ea1f5f1e24e2824ba13a04275f22fc05\" returns successfully" Oct 13 00:01:25.772261 kubelet[1864]: E1013 00:01:25.772088 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:26.092742 containerd[1533]: time="2025-10-13T00:01:26.092460651Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ead8efc0411dc071f378202f57a93c7ea1f5f1e24e2824ba13a04275f22fc05\" id:\"21aee2633f127cf2e8b7c215caf919c9f97b65838c55ebe4b50c18fbd5018f21\" pid:4296 exit_status:1 exited_at:{seconds:1760313686 nanos:91436168}" Oct 13 00:01:26.772535 kubelet[1864]: E1013 00:01:26.772465 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:26.976981 update_engine[1507]: I20251013 00:01:26.976920 1507 update_attempter.cc:509] Updating boot flags... Oct 13 00:01:26.993014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3651341760.mount: Deactivated successfully. 
Oct 13 00:01:27.052505 containerd[1533]: time="2025-10-13T00:01:27.052022608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ead8efc0411dc071f378202f57a93c7ea1f5f1e24e2824ba13a04275f22fc05\" id:\"8b00eda7a4817f851c63ca1a08be020deb02dedce2e958b8cad0195bd6b1f8cd\" pid:4328 exit_status:1 exited_at:{seconds:1760313687 nanos:51345886}" Oct 13 00:01:27.773537 kubelet[1864]: E1013 00:01:27.773490 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:28.043668 containerd[1533]: time="2025-10-13T00:01:28.043403741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ead8efc0411dc071f378202f57a93c7ea1f5f1e24e2824ba13a04275f22fc05\" id:\"ce257adf9504224c000dada1e6708383474af070e7e12242f0049b3c27205503\" pid:4369 exit_status:1 exited_at:{seconds:1760313688 nanos:43046980}" Oct 13 00:01:28.494818 containerd[1533]: time="2025-10-13T00:01:28.494754854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:28.495812 containerd[1533]: time="2025-10-13T00:01:28.495646977Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Oct 13 00:01:28.497104 containerd[1533]: time="2025-10-13T00:01:28.497066221Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:28.499499 containerd[1533]: time="2025-10-13T00:01:28.499466628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:28.501174 containerd[1533]: time="2025-10-13T00:01:28.501047073Z" level=info msg="Pulled image 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.229202313s" Oct 13 00:01:28.501174 containerd[1533]: time="2025-10-13T00:01:28.501082513Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Oct 13 00:01:28.512089 containerd[1533]: time="2025-10-13T00:01:28.512050467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 00:01:28.515377 containerd[1533]: time="2025-10-13T00:01:28.515337877Z" level=info msg="CreateContainer within sandbox \"b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Oct 13 00:01:28.526677 containerd[1533]: time="2025-10-13T00:01:28.526616832Z" level=info msg="Container 77285fb5eef3026220c979552b7a3f36f845171dedfea7dfc0a9016a40272fd3: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:28.542668 containerd[1533]: time="2025-10-13T00:01:28.542614721Z" level=info msg="CreateContainer within sandbox \"b68870a33cf583bf1db738f1510942d92218a64b254a09582467cca6fb998e64\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"77285fb5eef3026220c979552b7a3f36f845171dedfea7dfc0a9016a40272fd3\"" Oct 13 00:01:28.543499 containerd[1533]: time="2025-10-13T00:01:28.543211483Z" level=info msg="StartContainer for \"77285fb5eef3026220c979552b7a3f36f845171dedfea7dfc0a9016a40272fd3\"" Oct 13 00:01:28.544283 containerd[1533]: time="2025-10-13T00:01:28.544251446Z" level=info msg="connecting to shim 77285fb5eef3026220c979552b7a3f36f845171dedfea7dfc0a9016a40272fd3" 
address="unix:///run/containerd/s/ad5457de8f732fa2baec098fca8edca969570dad52e07ee43a96b01e65c7b886" protocol=ttrpc version=3 Oct 13 00:01:28.565467 systemd[1]: Started cri-containerd-77285fb5eef3026220c979552b7a3f36f845171dedfea7dfc0a9016a40272fd3.scope - libcontainer container 77285fb5eef3026220c979552b7a3f36f845171dedfea7dfc0a9016a40272fd3. Oct 13 00:01:28.599759 containerd[1533]: time="2025-10-13T00:01:28.599718018Z" level=info msg="StartContainer for \"77285fb5eef3026220c979552b7a3f36f845171dedfea7dfc0a9016a40272fd3\" returns successfully" Oct 13 00:01:28.774306 kubelet[1864]: E1013 00:01:28.774169 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:28.994507 kubelet[1864]: I1013 00:01:28.994433 1864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.387136852 podStartE2EDuration="8.994416915s" podCreationTimestamp="2025-10-13 00:01:20 +0000 UTC" firstStartedPulling="2025-10-13 00:01:20.904594804 +0000 UTC m=+30.208185748" lastFinishedPulling="2025-10-13 00:01:28.511874867 +0000 UTC m=+37.815465811" observedRunningTime="2025-10-13 00:01:28.993966154 +0000 UTC m=+38.297557098" watchObservedRunningTime="2025-10-13 00:01:28.994416915 +0000 UTC m=+38.298007859" Oct 13 00:01:28.994690 kubelet[1864]: I1013 00:01:28.994641 1864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-854f97d977-9nphb" podStartSLOduration=49.974951153 podStartE2EDuration="57.994636036s" podCreationTimestamp="2025-10-13 00:00:31 +0000 UTC" firstStartedPulling="2025-10-13 00:01:17.251558755 +0000 UTC m=+26.555149659" lastFinishedPulling="2025-10-13 00:01:25.271243598 +0000 UTC m=+34.574834542" observedRunningTime="2025-10-13 00:01:25.994212505 +0000 UTC m=+35.297803449" watchObservedRunningTime="2025-10-13 00:01:28.994636036 +0000 UTC m=+38.298227020" Oct 13 00:01:29.542168 containerd[1533]: 
time="2025-10-13T00:01:29.542122100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:29.542843 containerd[1533]: time="2025-10-13T00:01:29.542763542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Oct 13 00:01:29.544116 containerd[1533]: time="2025-10-13T00:01:29.544077306Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:29.550367 containerd[1533]: time="2025-10-13T00:01:29.549960003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:29.551055 containerd[1533]: time="2025-10-13T00:01:29.551015526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.038921459s" Oct 13 00:01:29.551100 containerd[1533]: time="2025-10-13T00:01:29.551052846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Oct 13 00:01:29.555049 containerd[1533]: time="2025-10-13T00:01:29.555016578Z" level=info msg="CreateContainer within sandbox \"6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 
00:01:29.565159 containerd[1533]: time="2025-10-13T00:01:29.563960004Z" level=info msg="Container 365ffcf7052209224959bbac8a618e4527434b3cdd3ac3824f02e84c2afaca6c: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:29.573025 containerd[1533]: time="2025-10-13T00:01:29.572981670Z" level=info msg="CreateContainer within sandbox \"6146f2c0d19c32f5d13468056328ca7375b79aa5a47934b7c53fb5eeff553680\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"365ffcf7052209224959bbac8a618e4527434b3cdd3ac3824f02e84c2afaca6c\"" Oct 13 00:01:29.573710 containerd[1533]: time="2025-10-13T00:01:29.573686992Z" level=info msg="StartContainer for \"365ffcf7052209224959bbac8a618e4527434b3cdd3ac3824f02e84c2afaca6c\"" Oct 13 00:01:29.575111 containerd[1533]: time="2025-10-13T00:01:29.575088196Z" level=info msg="connecting to shim 365ffcf7052209224959bbac8a618e4527434b3cdd3ac3824f02e84c2afaca6c" address="unix:///run/containerd/s/70ce6089643c02ef5b379a361df7fd2e064b68dba2584b7df5ba3a68f096a2ab" protocol=ttrpc version=3 Oct 13 00:01:29.599127 systemd[1]: Started cri-containerd-365ffcf7052209224959bbac8a618e4527434b3cdd3ac3824f02e84c2afaca6c.scope - libcontainer container 365ffcf7052209224959bbac8a618e4527434b3cdd3ac3824f02e84c2afaca6c. 
Oct 13 00:01:29.636956 containerd[1533]: time="2025-10-13T00:01:29.636908455Z" level=info msg="StartContainer for \"365ffcf7052209224959bbac8a618e4527434b3cdd3ac3824f02e84c2afaca6c\" returns successfully" Oct 13 00:01:29.774712 kubelet[1864]: E1013 00:01:29.774656 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:29.869287 kubelet[1864]: I1013 00:01:29.869146 1864 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 00:01:29.872308 kubelet[1864]: I1013 00:01:29.872266 1864 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 00:01:30.775362 kubelet[1864]: E1013 00:01:30.775305 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:31.750552 kubelet[1864]: E1013 00:01:31.750503 1864 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:31.775911 kubelet[1864]: E1013 00:01:31.775866 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:32.776522 kubelet[1864]: E1013 00:01:32.776478 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:33.777233 kubelet[1864]: E1013 00:01:33.777185 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:34.131306 kubelet[1864]: I1013 00:01:34.131101 1864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8zhm2" podStartSLOduration=27.888189125 podStartE2EDuration="43.131083611s" 
podCreationTimestamp="2025-10-13 00:00:51 +0000 UTC" firstStartedPulling="2025-10-13 00:01:14.308830922 +0000 UTC m=+23.612421866" lastFinishedPulling="2025-10-13 00:01:29.551725408 +0000 UTC m=+38.855316352" observedRunningTime="2025-10-13 00:01:29.997949659 +0000 UTC m=+39.301540603" watchObservedRunningTime="2025-10-13 00:01:34.131083611 +0000 UTC m=+43.434674555" Oct 13 00:01:34.142724 systemd[1]: Created slice kubepods-besteffort-podd6ad2da9_d85d_4b79_9092_be5e0c4055c0.slice - libcontainer container kubepods-besteffort-podd6ad2da9_d85d_4b79_9092_be5e0c4055c0.slice. Oct 13 00:01:34.253212 kubelet[1864]: I1013 00:01:34.253136 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e345ba11-7f66-43d3-b9a2-78c690d1c4d1\" (UniqueName: \"kubernetes.io/nfs/d6ad2da9-d85d-4b79-9092-be5e0c4055c0-pvc-e345ba11-7f66-43d3-b9a2-78c690d1c4d1\") pod \"test-pod-1\" (UID: \"d6ad2da9-d85d-4b79-9092-be5e0c4055c0\") " pod="default/test-pod-1" Oct 13 00:01:34.253212 kubelet[1864]: I1013 00:01:34.253189 1864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2gsc\" (UniqueName: \"kubernetes.io/projected/d6ad2da9-d85d-4b79-9092-be5e0c4055c0-kube-api-access-l2gsc\") pod \"test-pod-1\" (UID: \"d6ad2da9-d85d-4b79-9092-be5e0c4055c0\") " pod="default/test-pod-1" Oct 13 00:01:34.397989 kernel: netfs: FS-Cache loaded Oct 13 00:01:34.423165 kernel: RPC: Registered named UNIX socket transport module. Oct 13 00:01:34.423270 kernel: RPC: Registered udp transport module. Oct 13 00:01:34.423287 kernel: RPC: Registered tcp transport module. Oct 13 00:01:34.424499 kernel: RPC: Registered tcp-with-tls transport module. Oct 13 00:01:34.425388 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Oct 13 00:01:34.580778 kubelet[1864]: I1013 00:01:34.580577 1864 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:01:34.604290 kernel: NFS: Registering the id_resolver key type Oct 13 00:01:34.604611 kernel: Key type id_resolver registered Oct 13 00:01:34.604649 kernel: Key type id_legacy registered Oct 13 00:01:34.634161 kubelet[1864]: I1013 00:01:34.634123 1864 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:01:34.635295 nfsidmap[4541]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Oct 13 00:01:34.636222 nfsidmap[4541]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Oct 13 00:01:34.640037 nfsidmap[4542]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Oct 13 00:01:34.640195 nfsidmap[4542]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Oct 13 00:01:34.652933 nfsrahead[4545]: setting /var/lib/kubelet/pods/d6ad2da9-d85d-4b79-9092-be5e0c4055c0/volumes/kubernetes.io~nfs/pvc-e345ba11-7f66-43d3-b9a2-78c690d1c4d1 readahead to 128 Oct 13 00:01:34.657996 containerd[1533]: time="2025-10-13T00:01:34.657951235Z" level=info msg="StopContainer for \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\" with timeout 30 (s)" Oct 13 00:01:34.658835 containerd[1533]: time="2025-10-13T00:01:34.658772957Z" level=info msg="Stop container \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\" with signal terminated" Oct 13 00:01:34.672205 systemd[1]: cri-containerd-55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51.scope: Deactivated successfully. 
Oct 13 00:01:34.672495 systemd[1]: cri-containerd-55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51.scope: Consumed 1.051s CPU time, 43.9M memory peak, 4K read from disk. Oct 13 00:01:34.679102 containerd[1533]: time="2025-10-13T00:01:34.678994719Z" level=info msg="received exit event container_id:\"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\" id:\"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\" pid:3645 exit_status:1 exited_at:{seconds:1760313694 nanos:678646598}" Oct 13 00:01:34.679482 containerd[1533]: time="2025-10-13T00:01:34.679455360Z" level=info msg="TaskExit event in podsandbox handler container_id:\"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\" id:\"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\" pid:3645 exit_status:1 exited_at:{seconds:1760313694 nanos:678646598}" Oct 13 00:01:34.700043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51-rootfs.mount: Deactivated successfully. 
Oct 13 00:01:34.778101 kubelet[1864]: E1013 00:01:34.778067 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:34.779451 containerd[1533]: time="2025-10-13T00:01:34.779417849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d6ad2da9-d85d-4b79-9092-be5e0c4055c0,Namespace:default,Attempt:0,}" Oct 13 00:01:34.784138 containerd[1533]: time="2025-10-13T00:01:34.783966379Z" level=info msg="StopContainer for \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\" returns successfully" Oct 13 00:01:34.786686 containerd[1533]: time="2025-10-13T00:01:34.786615064Z" level=info msg="StopPodSandbox for \"f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c\"" Oct 13 00:01:34.794739 containerd[1533]: time="2025-10-13T00:01:34.794448841Z" level=info msg="Container to stop \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 00:01:34.801524 systemd[1]: cri-containerd-f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c.scope: Deactivated successfully. 
Oct 13 00:01:34.805503 containerd[1533]: time="2025-10-13T00:01:34.805471424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c\" id:\"f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c\" pid:3181 exit_status:137 exited_at:{seconds:1760313694 nanos:805171903}" Oct 13 00:01:34.830448 containerd[1533]: time="2025-10-13T00:01:34.830378516Z" level=info msg="received exit event sandbox_id:\"f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c\" exit_status:137 exited_at:{seconds:1760313694 nanos:805171903}" Oct 13 00:01:34.832767 containerd[1533]: time="2025-10-13T00:01:34.832730361Z" level=info msg="shim disconnected" id=f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c namespace=k8s.io Oct 13 00:01:34.839299 containerd[1533]: time="2025-10-13T00:01:34.832763041Z" level=warning msg="cleaning up after shim disconnected" id=f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c namespace=k8s.io Oct 13 00:01:34.839299 containerd[1533]: time="2025-10-13T00:01:34.839287575Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:01:34.891089 systemd-networkd[1435]: cali030f82bed86: Link DOWN Oct 13 00:01:34.891097 systemd-networkd[1435]: cali030f82bed86: Lost carrier Oct 13 00:01:34.923469 systemd-networkd[1435]: cali5ec59c6bf6e: Link UP Oct 13 00:01:34.923642 systemd-networkd[1435]: cali5ec59c6bf6e: Gained carrier Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.823 [INFO][4566] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.51-k8s-test--pod--1-eth0 default d6ad2da9-d85d-4b79-9092-be5e0c4055c0 1337 0 2025-10-13 00:01:20 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.51 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e 
[] [] }} ContainerID="3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.51-k8s-test--pod--1-" Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.824 [INFO][4566] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.51-k8s-test--pod--1-eth0" Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.852 [INFO][4606] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" HandleID="k8s-pod-network.3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" Workload="10.0.0.51-k8s-test--pod--1-eth0" Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.852 [INFO][4606] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" HandleID="k8s-pod-network.3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" Workload="10.0.0.51-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b070), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.51", "pod":"test-pod-1", "timestamp":"2025-10-13 00:01:34.852568882 +0000 UTC"}, Hostname:"10.0.0.51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.852 [INFO][4606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.852 [INFO][4606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.852 [INFO][4606] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.51' Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.867 [INFO][4606] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" host="10.0.0.51" Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.882 [INFO][4606] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.51" Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.890 [INFO][4606] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.894 [INFO][4606] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.897 [INFO][4606] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="10.0.0.51" Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.897 [INFO][4606] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" host="10.0.0.51" Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.901 [INFO][4606] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.910 [INFO][4606] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" host="10.0.0.51" Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.919 [INFO][4606] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.203/26] block=192.168.109.192/26 
handle="k8s-pod-network.3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" host="10.0.0.51" Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.919 [INFO][4606] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.203/26] handle="k8s-pod-network.3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" host="10.0.0.51" Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.919 [INFO][4606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.919 [INFO][4606] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.203/26] IPv6=[] ContainerID="3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" HandleID="k8s-pod-network.3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" Workload="10.0.0.51-k8s-test--pod--1-eth0" Oct 13 00:01:34.938145 containerd[1533]: 2025-10-13 00:01:34.921 [INFO][4566] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.51-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d6ad2da9-d85d-4b79-9092-be5e0c4055c0", ResourceVersion:"1337", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 1, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.0.0.51", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.109.203/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:34.939603 containerd[1533]: 2025-10-13 00:01:34.921 [INFO][4566] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.203/32] ContainerID="3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.51-k8s-test--pod--1-eth0" Oct 13 00:01:34.939603 containerd[1533]: 2025-10-13 00:01:34.921 [INFO][4566] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.51-k8s-test--pod--1-eth0" Oct 13 00:01:34.939603 containerd[1533]: 2025-10-13 00:01:34.923 [INFO][4566] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.51-k8s-test--pod--1-eth0" Oct 13 00:01:34.939603 containerd[1533]: 2025-10-13 00:01:34.925 [INFO][4566] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.51-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.51-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d6ad2da9-d85d-4b79-9092-be5e0c4055c0", ResourceVersion:"1337", 
Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 1, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.51", ContainerID:"3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.109.203/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"36:f0:51:61:bf:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:34.939603 containerd[1533]: 2025-10-13 00:01:34.936 [INFO][4566] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.51-k8s-test--pod--1-eth0" Oct 13 00:01:34.963400 containerd[1533]: time="2025-10-13T00:01:34.963337194Z" level=info msg="connecting to shim 3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d" address="unix:///run/containerd/s/0fcf2054153537a73ad628206a95cb35932ee8afc9547938e58ef4b655541c70" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:34.989008 containerd[1533]: 2025-10-13 00:01:34.889 [INFO][4623] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Oct 13 00:01:34.989008 containerd[1533]: 2025-10-13 00:01:34.889 [INFO][4623] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" iface="eth0" netns="/var/run/netns/cni-724bfd92-89b9-42e9-e095-778194bdd384" Oct 13 00:01:34.989008 containerd[1533]: 2025-10-13 00:01:34.889 [INFO][4623] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" iface="eth0" netns="/var/run/netns/cni-724bfd92-89b9-42e9-e095-778194bdd384" Oct 13 00:01:34.989008 containerd[1533]: 2025-10-13 00:01:34.898 [INFO][4623] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" after=9.4433ms iface="eth0" netns="/var/run/netns/cni-724bfd92-89b9-42e9-e095-778194bdd384" Oct 13 00:01:34.989008 containerd[1533]: 2025-10-13 00:01:34.899 [INFO][4623] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Oct 13 00:01:34.989008 containerd[1533]: 2025-10-13 00:01:34.899 [INFO][4623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Oct 13 00:01:34.989008 containerd[1533]: 2025-10-13 00:01:34.921 [INFO][4652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" HandleID="k8s-pod-network.f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Workload="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0" Oct 13 00:01:34.989008 containerd[1533]: 2025-10-13 00:01:34.922 [INFO][4652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:34.989008 containerd[1533]: 2025-10-13 00:01:34.922 [INFO][4652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:01:34.989008 containerd[1533]: 2025-10-13 00:01:34.984 [INFO][4652] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" HandleID="k8s-pod-network.f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Workload="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0" Oct 13 00:01:34.989008 containerd[1533]: 2025-10-13 00:01:34.984 [INFO][4652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" HandleID="k8s-pod-network.f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Workload="10.0.0.51-k8s-calico--apiserver--5b686ccc97--gg2z6-eth0" Oct 13 00:01:34.989008 containerd[1533]: 2025-10-13 00:01:34.986 [INFO][4652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:01:34.989008 containerd[1533]: 2025-10-13 00:01:34.987 [INFO][4623] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c" Oct 13 00:01:34.989440 containerd[1533]: time="2025-10-13T00:01:34.989164729Z" level=info msg="TearDown network for sandbox \"f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c\" successfully" Oct 13 00:01:34.989440 containerd[1533]: time="2025-10-13T00:01:34.989196129Z" level=info msg="StopPodSandbox for \"f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c\" returns successfully" Oct 13 00:01:34.989048 systemd[1]: Started cri-containerd-3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d.scope - libcontainer container 3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d. 
Oct 13 00:01:35.008490 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:01:35.011455 kubelet[1864]: I1013 00:01:35.011425 1864 scope.go:117] "RemoveContainer" containerID="55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51" Oct 13 00:01:35.014045 containerd[1533]: time="2025-10-13T00:01:35.014007019Z" level=info msg="RemoveContainer for \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\"" Oct 13 00:01:35.033873 containerd[1533]: time="2025-10-13T00:01:35.033787738Z" level=info msg="RemoveContainer for \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\" returns successfully" Oct 13 00:01:35.034227 containerd[1533]: time="2025-10-13T00:01:35.033948338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d6ad2da9-d85d-4b79-9092-be5e0c4055c0,Namespace:default,Attempt:0,} returns sandbox id \"3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d\"" Oct 13 00:01:35.034276 kubelet[1864]: I1013 00:01:35.034168 1864 scope.go:117] "RemoveContainer" containerID="55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51" Oct 13 00:01:35.034421 containerd[1533]: time="2025-10-13T00:01:35.034390259Z" level=error msg="ContainerStatus for \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\": not found" Oct 13 00:01:35.035064 containerd[1533]: time="2025-10-13T00:01:35.035036340Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Oct 13 00:01:35.038149 kubelet[1864]: E1013 00:01:35.038103 1864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\": not found" 
containerID="55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51" Oct 13 00:01:35.038238 kubelet[1864]: I1013 00:01:35.038159 1864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51"} err="failed to get container status \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\": rpc error: code = NotFound desc = an error occurred when try to find container \"55ed5be7bb101f926a36847d1efd126996e26689620b5c8f642b02f733cb1f51\": not found" Oct 13 00:01:35.158458 kubelet[1864]: I1013 00:01:35.158394 1864 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c0133bef-7fb4-4183-a390-03737bc1326e-calico-apiserver-certs\") pod \"c0133bef-7fb4-4183-a390-03737bc1326e\" (UID: \"c0133bef-7fb4-4183-a390-03737bc1326e\") " Oct 13 00:01:35.158859 kubelet[1864]: I1013 00:01:35.158595 1864 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncxwf\" (UniqueName: \"kubernetes.io/projected/c0133bef-7fb4-4183-a390-03737bc1326e-kube-api-access-ncxwf\") pod \"c0133bef-7fb4-4183-a390-03737bc1326e\" (UID: \"c0133bef-7fb4-4183-a390-03737bc1326e\") " Oct 13 00:01:35.161273 kubelet[1864]: I1013 00:01:35.161233 1864 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0133bef-7fb4-4183-a390-03737bc1326e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "c0133bef-7fb4-4183-a390-03737bc1326e" (UID: "c0133bef-7fb4-4183-a390-03737bc1326e"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 00:01:35.161625 kubelet[1864]: I1013 00:01:35.161581 1864 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0133bef-7fb4-4183-a390-03737bc1326e-kube-api-access-ncxwf" (OuterVolumeSpecName: "kube-api-access-ncxwf") pod "c0133bef-7fb4-4183-a390-03737bc1326e" (UID: "c0133bef-7fb4-4183-a390-03737bc1326e"). InnerVolumeSpecName "kube-api-access-ncxwf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 00:01:35.259236 kubelet[1864]: I1013 00:01:35.259112 1864 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ncxwf\" (UniqueName: \"kubernetes.io/projected/c0133bef-7fb4-4183-a390-03737bc1326e-kube-api-access-ncxwf\") on node \"10.0.0.51\" DevicePath \"\"" Oct 13 00:01:35.259236 kubelet[1864]: I1013 00:01:35.259146 1864 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c0133bef-7fb4-4183-a390-03737bc1326e-calico-apiserver-certs\") on node \"10.0.0.51\" DevicePath \"\"" Oct 13 00:01:35.272834 containerd[1533]: time="2025-10-13T00:01:35.272134006Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:35.272834 containerd[1533]: time="2025-10-13T00:01:35.272630527Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Oct 13 00:01:35.276029 containerd[1533]: time="2025-10-13T00:01:35.275984773Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e1e3942d93b7c9e68a5e902395859d4f53de5aa9a187cba800c72cee6f9cb03f\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:0c4ba30a5f6a65d2bbdf93f2eff51d5304fd8c7f92cfc83a135a226aa2cd96af\", size \"70015565\" in 240.909073ms" Oct 13 00:01:35.276029 containerd[1533]: time="2025-10-13T00:01:35.276030293Z" level=info msg="PullImage 
\"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e1e3942d93b7c9e68a5e902395859d4f53de5aa9a187cba800c72cee6f9cb03f\"" Oct 13 00:01:35.280132 containerd[1533]: time="2025-10-13T00:01:35.280080421Z" level=info msg="CreateContainer within sandbox \"3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d\" for container &ContainerMetadata{Name:test,Attempt:0,}" Oct 13 00:01:35.285643 containerd[1533]: time="2025-10-13T00:01:35.285577352Z" level=info msg="Container 08a4aa0bcf1ab269d2f3534a46f29262ada30dfadd84ec2e68aa8a2f5e9061fd: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:35.296596 containerd[1533]: time="2025-10-13T00:01:35.296545454Z" level=info msg="CreateContainer within sandbox \"3e34e0cb335838b4450049d1de1b1c44317f1e822e87842fcdf6678db7cc4d1d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"08a4aa0bcf1ab269d2f3534a46f29262ada30dfadd84ec2e68aa8a2f5e9061fd\"" Oct 13 00:01:35.297337 containerd[1533]: time="2025-10-13T00:01:35.297293255Z" level=info msg="StartContainer for \"08a4aa0bcf1ab269d2f3534a46f29262ada30dfadd84ec2e68aa8a2f5e9061fd\"" Oct 13 00:01:35.298379 containerd[1533]: time="2025-10-13T00:01:35.298352297Z" level=info msg="connecting to shim 08a4aa0bcf1ab269d2f3534a46f29262ada30dfadd84ec2e68aa8a2f5e9061fd" address="unix:///run/containerd/s/0fcf2054153537a73ad628206a95cb35932ee8afc9547938e58ef4b655541c70" protocol=ttrpc version=3 Oct 13 00:01:35.312273 systemd[1]: Removed slice kubepods-besteffort-podc0133bef_7fb4_4183_a390_03737bc1326e.slice - libcontainer container kubepods-besteffort-podc0133bef_7fb4_4183_a390_03737bc1326e.slice. Oct 13 00:01:35.312378 systemd[1]: kubepods-besteffort-podc0133bef_7fb4_4183_a390_03737bc1326e.slice: Consumed 1.070s CPU time, 44.2M memory peak, 4K read from disk. 
Oct 13 00:01:35.325335 systemd[1]: Started cri-containerd-08a4aa0bcf1ab269d2f3534a46f29262ada30dfadd84ec2e68aa8a2f5e9061fd.scope - libcontainer container 08a4aa0bcf1ab269d2f3534a46f29262ada30dfadd84ec2e68aa8a2f5e9061fd. Oct 13 00:01:35.352353 containerd[1533]: time="2025-10-13T00:01:35.352311683Z" level=info msg="StartContainer for \"08a4aa0bcf1ab269d2f3534a46f29262ada30dfadd84ec2e68aa8a2f5e9061fd\" returns successfully" Oct 13 00:01:35.369265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c-rootfs.mount: Deactivated successfully. Oct 13 00:01:35.369359 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f76482087f5ce964580857764868ed72d0a0dc4d9d5421efe403f382682dbd5c-shm.mount: Deactivated successfully. Oct 13 00:01:35.369410 systemd[1]: run-netns-cni\x2d724bfd92\x2d89b9\x2d42e9\x2de095\x2d778194bdd384.mount: Deactivated successfully. Oct 13 00:01:35.369462 systemd[1]: var-lib-kubelet-pods-c0133bef\x2d7fb4\x2d4183\x2da390\x2d03737bc1326e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dncxwf.mount: Deactivated successfully. Oct 13 00:01:35.369513 systemd[1]: var-lib-kubelet-pods-c0133bef\x2d7fb4\x2d4183\x2da390\x2d03737bc1326e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Oct 13 00:01:35.778411 kubelet[1864]: E1013 00:01:35.778366 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:35.835167 kubelet[1864]: I1013 00:01:35.835129 1864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0133bef-7fb4-4183-a390-03737bc1326e" path="/var/lib/kubelet/pods/c0133bef-7fb4-4183-a390-03737bc1326e/volumes" Oct 13 00:01:36.037095 kubelet[1864]: I1013 00:01:36.036966 1864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.794964267 podStartE2EDuration="16.036948103s" podCreationTimestamp="2025-10-13 00:01:20 +0000 UTC" firstStartedPulling="2025-10-13 00:01:35.034661699 +0000 UTC m=+44.338252643" lastFinishedPulling="2025-10-13 00:01:35.276645535 +0000 UTC m=+44.580236479" observedRunningTime="2025-10-13 00:01:36.036732183 +0000 UTC m=+45.340323087" watchObservedRunningTime="2025-10-13 00:01:36.036948103 +0000 UTC m=+45.340539007" Oct 13 00:01:36.238938 systemd-networkd[1435]: cali5ec59c6bf6e: Gained IPv6LL Oct 13 00:01:36.778945 kubelet[1864]: E1013 00:01:36.778784 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:37.779801 kubelet[1864]: E1013 00:01:37.779740 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 13 00:01:38.536188 containerd[1533]: time="2025-10-13T00:01:38.536151590Z" level=info msg="StopContainer for \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\" with timeout 30 (s)" Oct 13 00:01:38.536725 containerd[1533]: time="2025-10-13T00:01:38.536568631Z" level=info msg="Stop container \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\" with signal terminated" Oct 13 00:01:38.559050 containerd[1533]: time="2025-10-13T00:01:38.558443626Z" level=info msg="received exit event 
container_id:\"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\" id:\"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\" pid:3829 exit_status:1 exited_at:{seconds:1760313698 nanos:558147866}" Oct 13 00:01:38.560538 containerd[1533]: time="2025-10-13T00:01:38.559206267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\" id:\"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\" pid:3829 exit_status:1 exited_at:{seconds:1760313698 nanos:558147866}" Oct 13 00:01:38.559334 systemd[1]: cri-containerd-3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0.scope: Deactivated successfully. Oct 13 00:01:38.559621 systemd[1]: cri-containerd-3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0.scope: Consumed 1.666s CPU time, 59.2M memory peak. Oct 13 00:01:38.590504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0-rootfs.mount: Deactivated successfully. Oct 13 00:01:38.615759 containerd[1533]: time="2025-10-13T00:01:38.615720999Z" level=info msg="StopContainer for \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\" returns successfully" Oct 13 00:01:38.620185 containerd[1533]: time="2025-10-13T00:01:38.620140126Z" level=info msg="StopPodSandbox for \"52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4\"" Oct 13 00:01:38.620263 containerd[1533]: time="2025-10-13T00:01:38.620228166Z" level=info msg="Container to stop \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 00:01:38.626602 systemd[1]: cri-containerd-52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4.scope: Deactivated successfully. 
Oct 13 00:01:38.628072 containerd[1533]: time="2025-10-13T00:01:38.628047139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4\" id:\"52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4\" pid:3758 exit_status:137 exited_at:{seconds:1760313698 nanos:627774978}" Oct 13 00:01:38.650221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4-rootfs.mount: Deactivated successfully. Oct 13 00:01:38.651071 containerd[1533]: time="2025-10-13T00:01:38.651033336Z" level=info msg="received exit event sandbox_id:\"52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4\" exit_status:137 exited_at:{seconds:1760313698 nanos:627774978}" Oct 13 00:01:38.651673 containerd[1533]: time="2025-10-13T00:01:38.651653777Z" level=info msg="shim disconnected" id=52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4 namespace=k8s.io Oct 13 00:01:38.651709 containerd[1533]: time="2025-10-13T00:01:38.651677377Z" level=warning msg="cleaning up after shim disconnected" id=52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4 namespace=k8s.io Oct 13 00:01:38.651709 containerd[1533]: time="2025-10-13T00:01:38.651705257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:01:38.654362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4-shm.mount: Deactivated successfully. 
Oct 13 00:01:38.701079 systemd-networkd[1435]: cali9ac0b1f2a6f: Link DOWN
Oct 13 00:01:38.701084 systemd-networkd[1435]: cali9ac0b1f2a6f: Lost carrier
Oct 13 00:01:38.770498 containerd[1533]: 2025-10-13 00:01:38.700 [INFO][4839] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4"
Oct 13 00:01:38.770498 containerd[1533]: 2025-10-13 00:01:38.700 [INFO][4839] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" iface="eth0" netns="/var/run/netns/cni-d00f4994-cc1e-e012-c7b4-87b1ca3fb692"
Oct 13 00:01:38.770498 containerd[1533]: 2025-10-13 00:01:38.700 [INFO][4839] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" iface="eth0" netns="/var/run/netns/cni-d00f4994-cc1e-e012-c7b4-87b1ca3fb692"
Oct 13 00:01:38.770498 containerd[1533]: 2025-10-13 00:01:38.710 [INFO][4839] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" after=9.684375ms iface="eth0" netns="/var/run/netns/cni-d00f4994-cc1e-e012-c7b4-87b1ca3fb692"
Oct 13 00:01:38.770498 containerd[1533]: 2025-10-13 00:01:38.710 [INFO][4839] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4"
Oct 13 00:01:38.770498 containerd[1533]: 2025-10-13 00:01:38.710 [INFO][4839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4"
Oct 13 00:01:38.770498 containerd[1533]: 2025-10-13 00:01:38.731 [INFO][4854] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" HandleID="k8s-pod-network.52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Workload="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0"
Oct 13 00:01:38.770498 containerd[1533]: 2025-10-13 00:01:38.731 [INFO][4854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Oct 13 00:01:38.770498 containerd[1533]: 2025-10-13 00:01:38.731 [INFO][4854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Oct 13 00:01:38.770498 containerd[1533]: 2025-10-13 00:01:38.765 [INFO][4854] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" HandleID="k8s-pod-network.52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Workload="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0"
Oct 13 00:01:38.770498 containerd[1533]: 2025-10-13 00:01:38.765 [INFO][4854] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" HandleID="k8s-pod-network.52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4" Workload="10.0.0.51-k8s-calico--apiserver--5b686ccc97--q9cxb-eth0"
Oct 13 00:01:38.770498 containerd[1533]: 2025-10-13 00:01:38.766 [INFO][4854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Oct 13 00:01:38.770498 containerd[1533]: 2025-10-13 00:01:38.768 [INFO][4839] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4"
Oct 13 00:01:38.771090 containerd[1533]: time="2025-10-13T00:01:38.770926770Z" level=info msg="TearDown network for sandbox \"52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4\" successfully"
Oct 13 00:01:38.771090 containerd[1533]: time="2025-10-13T00:01:38.770954170Z" level=info msg="StopPodSandbox for \"52a57c84069ad013a3576bfbaa30d7ba9156230253f88700d6e8707e7a09f0a4\" returns successfully"
Oct 13 00:01:38.772690 systemd[1]: run-netns-cni\x2dd00f4994\x2dcc1e\x2de012\x2dc7b4\x2d87b1ca3fb692.mount: Deactivated successfully.
Oct 13 00:01:38.780164 kubelet[1864]: E1013 00:01:38.780134 1864 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 13 00:01:38.883065 kubelet[1864]: I1013 00:01:38.882887 1864 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/abe04e5d-16c5-445a-94e4-1f8f8f789715-calico-apiserver-certs\") pod \"abe04e5d-16c5-445a-94e4-1f8f8f789715\" (UID: \"abe04e5d-16c5-445a-94e4-1f8f8f789715\") "
Oct 13 00:01:38.883065 kubelet[1864]: I1013 00:01:38.882950 1864 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jp5n\" (UniqueName: \"kubernetes.io/projected/abe04e5d-16c5-445a-94e4-1f8f8f789715-kube-api-access-4jp5n\") pod \"abe04e5d-16c5-445a-94e4-1f8f8f789715\" (UID: \"abe04e5d-16c5-445a-94e4-1f8f8f789715\") "
Oct 13 00:01:38.887440 kubelet[1864]: I1013 00:01:38.887313 1864 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe04e5d-16c5-445a-94e4-1f8f8f789715-kube-api-access-4jp5n" (OuterVolumeSpecName: "kube-api-access-4jp5n") pod "abe04e5d-16c5-445a-94e4-1f8f8f789715" (UID: "abe04e5d-16c5-445a-94e4-1f8f8f789715"). InnerVolumeSpecName "kube-api-access-4jp5n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Oct 13 00:01:38.887440 kubelet[1864]: I1013 00:01:38.887372 1864 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe04e5d-16c5-445a-94e4-1f8f8f789715-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "abe04e5d-16c5-445a-94e4-1f8f8f789715" (UID: "abe04e5d-16c5-445a-94e4-1f8f8f789715"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Oct 13 00:01:38.887824 systemd[1]: var-lib-kubelet-pods-abe04e5d\x2d16c5\x2d445a\x2d94e4\x2d1f8f8f789715-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4jp5n.mount: Deactivated successfully.
Oct 13 00:01:38.887922 systemd[1]: var-lib-kubelet-pods-abe04e5d\x2d16c5\x2d445a\x2d94e4\x2d1f8f8f789715-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Oct 13 00:01:38.983972 kubelet[1864]: I1013 00:01:38.983912 1864 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/abe04e5d-16c5-445a-94e4-1f8f8f789715-calico-apiserver-certs\") on node \"10.0.0.51\" DevicePath \"\""
Oct 13 00:01:38.983972 kubelet[1864]: I1013 00:01:38.983948 1864 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4jp5n\" (UniqueName: \"kubernetes.io/projected/abe04e5d-16c5-445a-94e4-1f8f8f789715-kube-api-access-4jp5n\") on node \"10.0.0.51\" DevicePath \"\""
Oct 13 00:01:39.037510 kubelet[1864]: I1013 00:01:39.037481 1864 scope.go:117] "RemoveContainer" containerID="3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0"
Oct 13 00:01:39.039517 containerd[1533]: time="2025-10-13T00:01:39.039479321Z" level=info msg="RemoveContainer for \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\""
Oct 13 00:01:39.041911 systemd[1]: Removed slice kubepods-besteffort-podabe04e5d_16c5_445a_94e4_1f8f8f789715.slice - libcontainer container kubepods-besteffort-podabe04e5d_16c5_445a_94e4_1f8f8f789715.slice.
Oct 13 00:01:39.042016 systemd[1]: kubepods-besteffort-podabe04e5d_16c5_445a_94e4_1f8f8f789715.slice: Consumed 1.683s CPU time, 59.4M memory peak.
Oct 13 00:01:39.048321 containerd[1533]: time="2025-10-13T00:01:39.048271414Z" level=info msg="RemoveContainer for \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\" returns successfully"
Oct 13 00:01:39.048538 kubelet[1864]: I1013 00:01:39.048514 1864 scope.go:117] "RemoveContainer" containerID="3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0"
Oct 13 00:01:39.048873 containerd[1533]: time="2025-10-13T00:01:39.048759695Z" level=error msg="ContainerStatus for \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\": not found"
Oct 13 00:01:39.048988 kubelet[1864]: E1013 00:01:39.048959 1864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\": not found" containerID="3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0"
Oct 13 00:01:39.049036 kubelet[1864]: I1013 00:01:39.048993 1864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0"} err="failed to get container status \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f10e1baa3fab323ac1a5b69c69ab5c11d443f95471498cfc5f08381a739ead0\": not found"