Sep 12 23:55:09.874292 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 23:55:09.874319 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025
Sep 12 23:55:09.874329 kernel: KASLR enabled
Sep 12 23:55:09.874334 kernel: efi: EFI v2.7 by EDK II
Sep 12 23:55:09.874340 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 12 23:55:09.874346 kernel: random: crng init done
Sep 12 23:55:09.874353 kernel: ACPI: Early table checksum verification disabled
Sep 12 23:55:09.874359 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 12 23:55:09.874366 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 23:55:09.874373 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:55:09.874379 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:55:09.874385 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:55:09.874392 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:55:09.874398 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:55:09.874406 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:55:09.874414 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:55:09.874421 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:55:09.874435 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:55:09.874442 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 12 23:55:09.874448 kernel: NUMA: Failed to initialise from firmware
Sep 12 23:55:09.874455 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 23:55:09.874462 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Sep 12 23:55:09.874468 kernel: Zone ranges:
Sep 12 23:55:09.874475 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 23:55:09.874481 kernel: DMA32 empty
Sep 12 23:55:09.874489 kernel: Normal empty
Sep 12 23:55:09.874498 kernel: Movable zone start for each node
Sep 12 23:55:09.874511 kernel: Early memory node ranges
Sep 12 23:55:09.874518 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 12 23:55:09.874524 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 12 23:55:09.874538 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 12 23:55:09.874546 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 12 23:55:09.874553 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 12 23:55:09.874561 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 12 23:55:09.874567 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 12 23:55:09.874574 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 23:55:09.874580 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 12 23:55:09.874986 kernel: psci: probing for conduit method from ACPI.
Sep 12 23:55:09.874996 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 23:55:09.875002 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 23:55:09.875013 kernel: psci: Trusted OS migration not required
Sep 12 23:55:09.875019 kernel: psci: SMC Calling Convention v1.1
Sep 12 23:55:09.875026 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 12 23:55:09.875035 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 23:55:09.875042 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 23:55:09.875049 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 12 23:55:09.875056 kernel: Detected PIPT I-cache on CPU0
Sep 12 23:55:09.875063 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 23:55:09.875069 kernel: CPU features: detected: Hardware dirty bit management
Sep 12 23:55:09.875076 kernel: CPU features: detected: Spectre-v4
Sep 12 23:55:09.875083 kernel: CPU features: detected: Spectre-BHB
Sep 12 23:55:09.875090 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 23:55:09.875097 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 23:55:09.875105 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 23:55:09.875112 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 23:55:09.875119 kernel: alternatives: applying boot alternatives
Sep 12 23:55:09.875127 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 12 23:55:09.875135 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:55:09.875146 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:55:09.875153 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:55:09.875160 kernel: Fallback order for Node 0: 0
Sep 12 23:55:09.875168 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 12 23:55:09.875175 kernel: Policy zone: DMA
Sep 12 23:55:09.875181 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:55:09.875190 kernel: software IO TLB: area num 4.
Sep 12 23:55:09.875197 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 12 23:55:09.875205 kernel: Memory: 2386336K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 185952K reserved, 0K cma-reserved)
Sep 12 23:55:09.875212 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 23:55:09.875219 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:55:09.875226 kernel: rcu: RCU event tracing is enabled.
Sep 12 23:55:09.875233 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 23:55:09.875240 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 23:55:09.875247 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 23:55:09.875253 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 23:55:09.875260 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 23:55:09.875268 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 23:55:09.875275 kernel: GICv3: 256 SPIs implemented
Sep 12 23:55:09.875282 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 23:55:09.875289 kernel: Root IRQ handler: gic_handle_irq
Sep 12 23:55:09.875295 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 23:55:09.875302 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 12 23:55:09.875309 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 12 23:55:09.875316 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 23:55:09.875323 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 12 23:55:09.875330 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 12 23:55:09.875337 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 12 23:55:09.875344 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:55:09.875352 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:55:09.875359 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 23:55:09.875366 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 23:55:09.875373 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 23:55:09.875379 kernel: arm-pv: using stolen time PV
Sep 12 23:55:09.875386 kernel: Console: colour dummy device 80x25
Sep 12 23:55:09.875393 kernel: ACPI: Core revision 20230628
Sep 12 23:55:09.875401 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 23:55:09.875408 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:55:09.875415 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 23:55:09.875423 kernel: landlock: Up and running.
Sep 12 23:55:09.875430 kernel: SELinux: Initializing.
Sep 12 23:55:09.875437 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:55:09.875444 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:55:09.875451 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:55:09.875458 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:55:09.875470 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:55:09.875478 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 23:55:09.875487 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 12 23:55:09.875499 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 12 23:55:09.875508 kernel: Remapping and enabling EFI services.
Sep 12 23:55:09.875515 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:55:09.875522 kernel: Detected PIPT I-cache on CPU1
Sep 12 23:55:09.875529 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 12 23:55:09.876119 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 12 23:55:09.876135 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:55:09.876143 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 23:55:09.876151 kernel: Detected PIPT I-cache on CPU2
Sep 12 23:55:09.876158 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 12 23:55:09.876174 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 12 23:55:09.876181 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:55:09.876194 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 12 23:55:09.876203 kernel: Detected PIPT I-cache on CPU3
Sep 12 23:55:09.876210 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 12 23:55:09.876218 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 12 23:55:09.876225 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:55:09.876232 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 12 23:55:09.876239 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 23:55:09.876248 kernel: SMP: Total of 4 processors activated.
Sep 12 23:55:09.876256 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 23:55:09.876263 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 23:55:09.876271 kernel: CPU features: detected: Common not Private translations
Sep 12 23:55:09.876278 kernel: CPU features: detected: CRC32 instructions
Sep 12 23:55:09.876285 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 12 23:55:09.876293 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 23:55:09.876300 kernel: CPU features: detected: LSE atomic instructions
Sep 12 23:55:09.876316 kernel: CPU features: detected: Privileged Access Never
Sep 12 23:55:09.876326 kernel: CPU features: detected: RAS Extension Support
Sep 12 23:55:09.876333 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 12 23:55:09.876344 kernel: CPU: All CPU(s) started at EL1
Sep 12 23:55:09.876352 kernel: alternatives: applying system-wide alternatives
Sep 12 23:55:09.876359 kernel: devtmpfs: initialized
Sep 12 23:55:09.876367 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:55:09.876374 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 23:55:09.876382 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:55:09.876391 kernel: SMBIOS 3.0.0 present.
Sep 12 23:55:09.876398 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 12 23:55:09.876406 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:55:09.876413 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 23:55:09.876421 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 23:55:09.876428 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 23:55:09.876436 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:55:09.876443 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Sep 12 23:55:09.876450 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:55:09.876459 kernel: cpuidle: using governor menu
Sep 12 23:55:09.876467 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 23:55:09.876474 kernel: ASID allocator initialised with 32768 entries
Sep 12 23:55:09.876481 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:55:09.876489 kernel: Serial: AMBA PL011 UART driver
Sep 12 23:55:09.876496 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 23:55:09.876503 kernel: Modules: 0 pages in range for non-PLT usage
Sep 12 23:55:09.876511 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 23:55:09.876518 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:55:09.876527 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:55:09.876542 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 23:55:09.876550 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 23:55:09.876557 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:55:09.876565 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:55:09.876572 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 23:55:09.876580 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 23:55:09.876587 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:55:09.876594 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:55:09.876604 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:55:09.876611 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:55:09.876618 kernel: ACPI: Interpreter enabled
Sep 12 23:55:09.876626 kernel: ACPI: Using GIC for interrupt routing
Sep 12 23:55:09.876633 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 23:55:09.876641 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 23:55:09.876648 kernel: printk: console [ttyAMA0] enabled
Sep 12 23:55:09.876656 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 23:55:09.876823 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 23:55:09.876904 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 23:55:09.876972 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 23:55:09.877035 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 12 23:55:09.877099 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 12 23:55:09.877109 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 12 23:55:09.877116 kernel: PCI host bridge to bus 0000:00
Sep 12 23:55:09.877186 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 12 23:55:09.877249 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 23:55:09.877307 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 12 23:55:09.877365 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 23:55:09.877443 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 12 23:55:09.877518 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 23:55:09.877597 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 12 23:55:09.877668 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 12 23:55:09.877733 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 23:55:09.877810 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 23:55:09.877877 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 12 23:55:09.877943 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 12 23:55:09.878002 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 12 23:55:09.878060 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 23:55:09.878122 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 12 23:55:09.878132 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 23:55:09.878139 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 23:55:09.878147 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 23:55:09.878154 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 23:55:09.878162 kernel: iommu: Default domain type: Translated
Sep 12 23:55:09.878169 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 23:55:09.878177 kernel: efivars: Registered efivars operations
Sep 12 23:55:09.878184 kernel: vgaarb: loaded
Sep 12 23:55:09.878194 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 23:55:09.878201 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:55:09.878209 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:55:09.878216 kernel: pnp: PnP ACPI init
Sep 12 23:55:09.878292 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 12 23:55:09.878303 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 23:55:09.878310 kernel: NET: Registered PF_INET protocol family
Sep 12 23:55:09.878318 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:55:09.878328 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:55:09.878335 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:55:09.878343 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:55:09.878350 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:55:09.878358 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:55:09.878366 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:55:09.878373 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:55:09.878381 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:55:09.878388 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:55:09.878397 kernel: kvm [1]: HYP mode not available
Sep 12 23:55:09.878405 kernel: Initialise system trusted keyrings
Sep 12 23:55:09.878413 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:55:09.878420 kernel: Key type asymmetric registered
Sep 12 23:55:09.878427 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:55:09.878435 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 23:55:09.878442 kernel: io scheduler mq-deadline registered
Sep 12 23:55:09.878450 kernel: io scheduler kyber registered
Sep 12 23:55:09.878457 kernel: io scheduler bfq registered
Sep 12 23:55:09.878467 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 23:55:09.878475 kernel: ACPI: button: Power Button [PWRB]
Sep 12 23:55:09.878482 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 23:55:09.878559 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 12 23:55:09.878569 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:55:09.878577 kernel: thunder_xcv, ver 1.0
Sep 12 23:55:09.878585 kernel: thunder_bgx, ver 1.0
Sep 12 23:55:09.878592 kernel: nicpf, ver 1.0
Sep 12 23:55:09.878599 kernel: nicvf, ver 1.0
Sep 12 23:55:09.878674 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 23:55:09.878737 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:55:09 UTC (1757721309)
Sep 12 23:55:09.878747 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 23:55:09.878755 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 12 23:55:09.878807 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 23:55:09.878814 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 23:55:09.878822 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:55:09.878829 kernel: Segment Routing with IPv6
Sep 12 23:55:09.878839 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:55:09.878847 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:55:09.878854 kernel: Key type dns_resolver registered
Sep 12 23:55:09.878861 kernel: registered taskstats version 1
Sep 12 23:55:09.878869 kernel: Loading compiled-in X.509 certificates
Sep 12 23:55:09.878876 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e'
Sep 12 23:55:09.878883 kernel: Key type .fscrypt registered
Sep 12 23:55:09.878891 kernel: Key type fscrypt-provisioning registered
Sep 12 23:55:09.878898 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 23:55:09.878907 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:55:09.878914 kernel: ima: No architecture policies found
Sep 12 23:55:09.878922 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 23:55:09.878929 kernel: clk: Disabling unused clocks
Sep 12 23:55:09.878936 kernel: Freeing unused kernel memory: 39488K
Sep 12 23:55:09.878943 kernel: Run /init as init process
Sep 12 23:55:09.878950 kernel: with arguments:
Sep 12 23:55:09.878958 kernel: /init
Sep 12 23:55:09.878965 kernel: with environment:
Sep 12 23:55:09.878973 kernel: HOME=/
Sep 12 23:55:09.878980 kernel: TERM=linux
Sep 12 23:55:09.878987 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 23:55:09.878997 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 23:55:09.879007 systemd[1]: Detected virtualization kvm.
Sep 12 23:55:09.879016 systemd[1]: Detected architecture arm64.
Sep 12 23:55:09.879023 systemd[1]: Running in initrd.
Sep 12 23:55:09.879033 systemd[1]: No hostname configured, using default hostname.
Sep 12 23:55:09.879041 systemd[1]: Hostname set to .
Sep 12 23:55:09.879050 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:55:09.879058 systemd[1]: Queued start job for default target initrd.target.
Sep 12 23:55:09.879066 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:55:09.879074 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:55:09.879083 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 23:55:09.879091 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:55:09.879101 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 23:55:09.879110 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 23:55:09.879119 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 23:55:09.879128 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 23:55:09.879136 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:55:09.879144 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:55:09.879152 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:55:09.879162 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:55:09.879170 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:55:09.879177 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:55:09.879185 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:55:09.879193 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:55:09.879202 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:55:09.879209 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 23:55:09.879218 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:55:09.879226 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:55:09.879236 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:55:09.879244 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:55:09.879252 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 23:55:09.879259 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:55:09.879267 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 23:55:09.879275 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 23:55:09.879283 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:55:09.879291 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:55:09.879301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:55:09.879309 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 23:55:09.879317 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:55:09.879325 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 23:55:09.879353 systemd-journald[238]: Collecting audit messages is disabled.
Sep 12 23:55:09.879375 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:55:09.879385 systemd-journald[238]: Journal started
Sep 12 23:55:09.879405 systemd-journald[238]: Runtime Journal (/run/log/journal/4baf5d8a182a4e109382ae75deff1a48) is 5.9M, max 47.3M, 41.4M free.
Sep 12 23:55:09.874478 systemd-modules-load[239]: Inserted module 'overlay'
Sep 12 23:55:09.882442 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:55:09.884344 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:55:09.884801 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:55:09.889134 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 23:55:09.889156 kernel: Bridge firewalling registered
Sep 12 23:55:09.888244 systemd-modules-load[239]: Inserted module 'br_netfilter'
Sep 12 23:55:09.889849 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:55:09.901930 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:55:09.903754 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:55:09.906303 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:55:09.909132 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:55:09.917122 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:55:09.919114 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:55:09.922680 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:55:09.925457 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:55:09.938011 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 23:55:09.940404 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:55:09.950124 dracut-cmdline[276]: dracut-dracut-053
Sep 12 23:55:09.952944 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 12 23:55:09.968070 systemd-resolved[279]: Positive Trust Anchors:
Sep 12 23:55:09.968090 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:55:09.968125 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:55:09.972921 systemd-resolved[279]: Defaulting to hostname 'linux'.
Sep 12 23:55:09.977123 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:55:09.981450 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:55:10.024846 kernel: SCSI subsystem initialized
Sep 12 23:55:10.028784 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 23:55:10.036793 kernel: iscsi: registered transport (tcp)
Sep 12 23:55:10.049788 kernel: iscsi: registered transport (qla4xxx)
Sep 12 23:55:10.049835 kernel: QLogic iSCSI HBA Driver
Sep 12 23:55:10.092923 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:55:10.100974 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 23:55:10.118199 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 23:55:10.118259 kernel: device-mapper: uevent: version 1.0.3
Sep 12 23:55:10.118271 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 23:55:10.178795 kernel: raid6: neonx8 gen() 14595 MB/s
Sep 12 23:55:10.195775 kernel: raid6: neonx4 gen() 15115 MB/s
Sep 12 23:55:10.212783 kernel: raid6: neonx2 gen() 9465 MB/s
Sep 12 23:55:10.229787 kernel: raid6: neonx1 gen() 10185 MB/s
Sep 12 23:55:10.246792 kernel: raid6: int64x8 gen() 6851 MB/s
Sep 12 23:55:10.263781 kernel: raid6: int64x4 gen() 6802 MB/s
Sep 12 23:55:10.280781 kernel: raid6: int64x2 gen() 6043 MB/s
Sep 12 23:55:10.298288 kernel: raid6: int64x1 gen() 4964 MB/s
Sep 12 23:55:10.298340 kernel: raid6: using algorithm neonx4 gen() 15115 MB/s
Sep 12 23:55:10.314814 kernel: raid6: .... xor() 11981 MB/s, rmw enabled
Sep 12 23:55:10.314872 kernel: raid6: using neon recovery algorithm
Sep 12 23:55:10.320031 kernel: xor: measuring software checksum speed
Sep 12 23:55:10.320071 kernel: 8regs : 19254 MB/sec
Sep 12 23:55:10.321122 kernel: 32regs : 19631 MB/sec
Sep 12 23:55:10.321135 kernel: arm64_neon : 26927 MB/sec
Sep 12 23:55:10.321144 kernel: xor: using function: arm64_neon (26927 MB/sec)
Sep 12 23:55:10.370790 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 23:55:10.382944 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:55:10.397927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:55:10.414368 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Sep 12 23:55:10.419703 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:55:10.428710 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 23:55:10.448632 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Sep 12 23:55:10.484344 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:55:10.496255 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:55:10.547525 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:55:10.558751 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 23:55:10.571625 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:55:10.573172 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:55:10.575834 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:55:10.577511 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:55:10.590968 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 23:55:10.602059 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:55:10.605778 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 12 23:55:10.607777 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 23:55:10.611090 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 23:55:10.611125 kernel: GPT:9289727 != 19775487
Sep 12 23:55:10.611135 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 23:55:10.611144 kernel: GPT:9289727 != 19775487
Sep 12 23:55:10.611158 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 23:55:10.612489 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:55:10.614290 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:55:10.614400 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:55:10.616854 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:55:10.617821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:55:10.618007 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:55:10.620034 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:55:10.628989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:55:10.641259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:55:10.645322 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (522)
Sep 12 23:55:10.647778 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (528)
Sep 12 23:55:10.648485 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 23:55:10.659491 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 23:55:10.663943 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 23:55:10.668059 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 23:55:10.668996 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 23:55:10.683960 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 23:55:10.687085 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:55:10.693690 disk-uuid[556]: Primary Header is updated.
Sep 12 23:55:10.693690 disk-uuid[556]: Secondary Entries is updated.
Sep 12 23:55:10.693690 disk-uuid[556]: Secondary Header is updated.
Sep 12 23:55:10.702790 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:55:10.708787 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:55:10.709942 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:55:11.718780 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:55:11.718888 disk-uuid[557]: The operation has completed successfully.
Sep 12 23:55:11.751721 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 23:55:11.751829 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 23:55:11.768958 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 23:55:11.772671 sh[581]: Success
Sep 12 23:55:11.782880 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 23:55:11.815682 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 23:55:11.827131 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 23:55:11.830881 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 23:55:11.842448 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77
Sep 12 23:55:11.842497 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:55:11.846500 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 23:55:11.846551 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 23:55:11.847113 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 23:55:11.861718 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 23:55:11.862578 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 23:55:11.869929 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 23:55:11.873951 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 23:55:11.879006 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:55:11.879048 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:55:11.879102 kernel: BTRFS info (device vda6): using free space tree
Sep 12 23:55:11.881812 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 23:55:11.889512 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 23:55:11.891714 kernel: BTRFS info (device vda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:55:11.898819 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 23:55:11.906909 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 23:55:11.965023 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:55:11.976938 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:55:11.979110 ignition[677]: Ignition 2.19.0
Sep 12 23:55:11.979118 ignition[677]: Stage: fetch-offline
Sep 12 23:55:11.979156 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:55:11.979164 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:55:11.979302 ignition[677]: parsed url from cmdline: ""
Sep 12 23:55:11.979305 ignition[677]: no config URL provided
Sep 12 23:55:11.979310 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 23:55:11.979316 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Sep 12 23:55:11.979337 ignition[677]: op(1): [started] loading QEMU firmware config module
Sep 12 23:55:11.979341 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 23:55:11.988770 ignition[677]: op(1): [finished] loading QEMU firmware config module
Sep 12 23:55:12.004636 systemd-networkd[771]: lo: Link UP
Sep 12 23:55:12.004646 systemd-networkd[771]: lo: Gained carrier
Sep 12 23:55:12.005421 systemd-networkd[771]: Enumeration completed
Sep 12 23:55:12.005508 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:55:12.005886 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:55:12.005889 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:55:12.006831 systemd-networkd[771]: eth0: Link UP
Sep 12 23:55:12.006835 systemd-networkd[771]: eth0: Gained carrier
Sep 12 23:55:12.006841 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:55:12.007590 systemd[1]: Reached target network.target - Network.
Sep 12 23:55:12.030831 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 23:55:12.044482 ignition[677]: parsing config with SHA512: 839b4a5b26b5f3051d7203842e258ffc61b8ba3efe4c73c226f1df6ca8aede1b6d24cd24798e4dbcfa2bfcc83091b081cd7ed4c88e99a1ce970a38170a639a3c
Sep 12 23:55:12.048512 unknown[677]: fetched base config from "system"
Sep 12 23:55:12.048521 unknown[677]: fetched user config from "qemu"
Sep 12 23:55:12.050610 ignition[677]: fetch-offline: fetch-offline passed
Sep 12 23:55:12.050689 ignition[677]: Ignition finished successfully
Sep 12 23:55:12.053267 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:55:12.054539 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 23:55:12.058931 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 23:55:12.069849 ignition[777]: Ignition 2.19.0
Sep 12 23:55:12.069858 ignition[777]: Stage: kargs
Sep 12 23:55:12.070021 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:55:12.070030 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:55:12.073279 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 23:55:12.070914 ignition[777]: kargs: kargs passed
Sep 12 23:55:12.070957 ignition[777]: Ignition finished successfully
Sep 12 23:55:12.081967 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 23:55:12.091599 ignition[786]: Ignition 2.19.0
Sep 12 23:55:12.091611 ignition[786]: Stage: disks
Sep 12 23:55:12.091843 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:55:12.091853 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:55:12.092707 ignition[786]: disks: disks passed
Sep 12 23:55:12.092753 ignition[786]: Ignition finished successfully
Sep 12 23:55:12.095835 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 23:55:12.096986 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 23:55:12.098422 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 23:55:12.100186 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:55:12.101706 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:55:12.103091 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:55:12.114941 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 23:55:12.126916 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 23:55:12.130843 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 23:55:12.152917 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 23:55:12.196561 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 23:55:12.197754 kernel: EXT4-fs (vda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none.
Sep 12 23:55:12.197655 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:55:12.209842 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:55:12.213521 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 23:55:12.215678 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 23:55:12.215728 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 23:55:12.215749 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:55:12.222633 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (805)
Sep 12 23:55:12.219941 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 23:55:12.226348 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:55:12.226368 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:55:12.226378 kernel: BTRFS info (device vda6): using free space tree
Sep 12 23:55:12.222541 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 23:55:12.229791 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 23:55:12.231106 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:55:12.261037 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 23:55:12.264255 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Sep 12 23:55:12.268502 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 23:55:12.272016 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 23:55:12.335902 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 23:55:12.342912 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 23:55:12.344217 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 23:55:12.349814 kernel: BTRFS info (device vda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:55:12.364183 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 23:55:12.367255 ignition[917]: INFO : Ignition 2.19.0
Sep 12 23:55:12.367255 ignition[917]: INFO : Stage: mount
Sep 12 23:55:12.368501 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:55:12.368501 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:55:12.368501 ignition[917]: INFO : mount: mount passed
Sep 12 23:55:12.368501 ignition[917]: INFO : Ignition finished successfully
Sep 12 23:55:12.370979 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 23:55:12.383874 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 23:55:12.840823 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 23:55:12.858964 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:55:12.866784 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (932)
Sep 12 23:55:12.868992 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:55:12.869020 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:55:12.869031 kernel: BTRFS info (device vda6): using free space tree
Sep 12 23:55:12.871772 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 23:55:12.872861 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:55:12.888663 ignition[949]: INFO : Ignition 2.19.0
Sep 12 23:55:12.888663 ignition[949]: INFO : Stage: files
Sep 12 23:55:12.890397 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:55:12.890397 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:55:12.890397 ignition[949]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 23:55:12.893454 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 23:55:12.893454 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 23:55:12.893454 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 23:55:12.893454 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 23:55:12.893454 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 23:55:12.893444 unknown[949]: wrote ssh authorized keys file for user: core
Sep 12 23:55:12.899709 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 12 23:55:12.899709 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 12 23:55:12.937131 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 23:55:13.154008 systemd-networkd[771]: eth0: Gained IPv6LL
Sep 12 23:55:13.240865 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 23:55:13.242578 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 12 23:55:13.791504 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 23:55:14.291009 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 23:55:14.291009 ignition[949]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 23:55:14.294125 ignition[949]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:55:14.294125 ignition[949]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:55:14.294125 ignition[949]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 23:55:14.294125 ignition[949]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 23:55:14.294125 ignition[949]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 23:55:14.294125 ignition[949]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 23:55:14.294125 ignition[949]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 23:55:14.294125 ignition[949]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 23:55:14.311583 ignition[949]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 23:55:14.315558 ignition[949]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 23:55:14.317939 ignition[949]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 23:55:14.317939 ignition[949]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 23:55:14.317939 ignition[949]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 23:55:14.317939 ignition[949]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:55:14.317939 ignition[949]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:55:14.317939 ignition[949]: INFO : files: files passed
Sep 12 23:55:14.317939 ignition[949]: INFO : Ignition finished successfully
Sep 12 23:55:14.318198 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 23:55:14.326910 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 23:55:14.328511 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 23:55:14.330769 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 23:55:14.330843 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 23:55:14.336890 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 23:55:14.339280 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:55:14.339280 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:55:14.342057 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:55:14.341541 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:55:14.343400 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 23:55:14.353219 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 23:55:14.372064 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 23:55:14.372173 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 23:55:14.373937 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 23:55:14.375436 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 23:55:14.376898 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 23:55:14.377661 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 23:55:14.392867 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:55:14.395026 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 23:55:14.406622 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:55:14.407669 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:55:14.409298 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 23:55:14.410653 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 23:55:14.410775 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:55:14.412800 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 23:55:14.414449 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 23:55:14.415682 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 23:55:14.417023 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:55:14.418535 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 23:55:14.420038 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 23:55:14.421454 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:55:14.422947 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 23:55:14.424539 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 23:55:14.425857 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 23:55:14.426994 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 23:55:14.427118 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:55:14.428906 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:55:14.430375 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:55:14.431839 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 23:55:14.432885 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:55:14.434160 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 23:55:14.434271 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 23:55:14.436394 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 23:55:14.436512 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:55:14.437996 systemd[1]: Stopped target paths.target - Path Units. Sep 12 23:55:14.439173 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 23:55:14.443844 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:55:14.444820 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 23:55:14.446526 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 23:55:14.447810 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 23:55:14.447897 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:55:14.449153 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 23:55:14.449233 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:55:14.450460 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 23:55:14.450573 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:55:14.451920 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 23:55:14.452017 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 23:55:14.463940 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 23:55:14.465423 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 23:55:14.466226 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 23:55:14.466337 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:55:14.467687 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 23:55:14.467879 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 23:55:14.472372 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 23:55:14.473350 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 23:55:14.476501 ignition[1003]: INFO : Ignition 2.19.0 Sep 12 23:55:14.476501 ignition[1003]: INFO : Stage: umount Sep 12 23:55:14.478717 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:55:14.478717 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:55:14.478717 ignition[1003]: INFO : umount: umount passed Sep 12 23:55:14.478717 ignition[1003]: INFO : Ignition finished successfully Sep 12 23:55:14.479299 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 23:55:14.479402 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 23:55:14.481144 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 23:55:14.481510 systemd[1]: Stopped target network.target - Network. Sep 12 23:55:14.482240 systemd[1]: ignition-disks.service: Deactivated successfully. 
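
All of the Ignition output above survives into the persistent journal once it is flushed later in this boot, so the umount stage can be replayed after the fact. Two one-liners for doing so (the syslog identifier "ignition" is inferred from the ignition[1003]: prefix in the log):

    journalctl -b -t ignition                 # every Ignition message from the current boot
    journalctl -b -u ignition-mount.service   # just the unit being stopped here
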
Sep 12 23:55:14.482289 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 23:55:14.483690 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 23:55:14.483727 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 23:55:14.485213 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 23:55:14.485252 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 23:55:14.486536 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 23:55:14.486576 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 23:55:14.488077 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 23:55:14.489421 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 23:55:14.492911 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 23:55:14.493020 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 23:55:14.494877 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 23:55:14.494929 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:55:14.496932 systemd-networkd[771]: eth0: DHCPv6 lease lost Sep 12 23:55:14.498437 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 23:55:14.498560 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 23:55:14.500017 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 23:55:14.500046 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:55:14.511910 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 23:55:14.512660 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 23:55:14.512722 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:55:14.514462 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:55:14.514504 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:55:14.515831 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 23:55:14.515869 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 23:55:14.517671 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:55:14.527221 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 23:55:14.527325 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 23:55:14.537613 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 23:55:14.537774 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:55:14.539788 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 23:55:14.539829 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 23:55:14.541371 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 23:55:14.541409 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:55:14.543104 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 23:55:14.543151 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:55:14.545320 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 23:55:14.545362 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Sep 12 23:55:14.547464 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:55:14.547509 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:55:14.562922 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 23:55:14.563725 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 23:55:14.563804 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:55:14.565609 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 23:55:14.565648 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:55:14.567271 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 23:55:14.567308 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:55:14.569083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:55:14.569120 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:55:14.570987 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 23:55:14.571067 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 23:55:14.572609 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 23:55:14.572684 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 23:55:14.574692 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 23:55:14.575639 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 23:55:14.575699 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 23:55:14.577970 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 23:55:14.587314 systemd[1]: Switching root. Sep 12 23:55:14.616596 systemd-journald[238]: Journal stopped Sep 12 23:55:15.259072 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Sep 12 23:55:15.259133 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 23:55:15.259146 kernel: SELinux: policy capability open_perms=1 Sep 12 23:55:15.259155 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 23:55:15.259165 kernel: SELinux: policy capability always_check_network=0 Sep 12 23:55:15.259175 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 23:55:15.259186 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 23:55:15.259196 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 23:55:15.259211 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 23:55:15.259222 kernel: audit: type=1403 audit(1757721314.748:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 23:55:15.259233 systemd[1]: Successfully loaded SELinux policy in 29.628ms. Sep 12 23:55:15.259254 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.298ms. Sep 12 23:55:15.259267 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 23:55:15.259278 systemd[1]: Detected virtualization kvm. Sep 12 23:55:15.259289 systemd[1]: Detected architecture arm64. 
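
The long feature string printed with "systemd 255 running in system mode" is systemd's compile-time feature list, and it can be reproduced on the running machine when comparing builds; the second command is an assumption in that it requires the libselinux utilities to be present on the image:

    systemctl --version   # prints the version plus the same +PAM +AUDIT +SELINUX ... flags
    getenforce            # assumption: reports Enforcing/Permissive if libselinux tools exist
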
Sep 12 23:55:15.259300 systemd[1]: Detected first boot. Sep 12 23:55:15.259313 systemd[1]: Initializing machine ID from VM UUID. Sep 12 23:55:15.259324 zram_generator::config[1047]: No configuration found. Sep 12 23:55:15.259341 systemd[1]: Populated /etc with preset unit settings. Sep 12 23:55:15.259352 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 23:55:15.259363 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 23:55:15.259375 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 23:55:15.259386 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 23:55:15.259398 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 23:55:15.259411 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 23:55:15.259422 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 23:55:15.259433 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 23:55:15.259445 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 23:55:15.259456 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 23:55:15.259468 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 23:55:15.259479 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:55:15.259490 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:55:15.259501 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 23:55:15.259514 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 23:55:15.259535 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 23:55:15.259547 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:55:15.259559 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 23:55:15.259570 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:55:15.259581 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 23:55:15.259591 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 23:55:15.259603 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 23:55:15.259616 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 23:55:15.259627 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:55:15.259638 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:55:15.259649 systemd[1]: Reached target slices.target - Slice Units. Sep 12 23:55:15.259660 systemd[1]: Reached target swap.target - Swaps. Sep 12 23:55:15.259671 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 23:55:15.259683 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 23:55:15.259694 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:55:15.259706 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
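
"Populated /etc with preset unit settings" on a detected first boot is systemd applying unit preset files, the same mechanism the Ignition stage used above when it set presets for prepare-helm.service and coreos-metadata.service. The manual equivalents are standard systemctl verbs:

    systemctl preset-all                    # apply every preset file, as done on first boot
    systemctl preset prepare-helm.service   # apply the preset for a single unit
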
Sep 12 23:55:15.259720 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:55:15.259731 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 23:55:15.259743 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 23:55:15.259754 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 23:55:15.259812 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 23:55:15.259825 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 23:55:15.259836 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 23:55:15.259847 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 23:55:15.259859 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 23:55:15.259873 systemd[1]: Reached target machines.target - Containers. Sep 12 23:55:15.259889 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 23:55:15.259901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:55:15.259912 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:55:15.259923 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 23:55:15.259934 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:55:15.259945 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:55:15.259956 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:55:15.259969 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 23:55:15.259981 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:55:15.259992 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 23:55:15.260003 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 23:55:15.260015 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 23:55:15.260026 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 23:55:15.260037 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 23:55:15.260048 kernel: loop: module loaded Sep 12 23:55:15.260059 kernel: fuse: init (API version 7.39) Sep 12 23:55:15.260071 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:55:15.260083 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:55:15.260095 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 23:55:15.260105 kernel: ACPI: bus type drm_connector registered Sep 12 23:55:15.260133 systemd-journald[1114]: Collecting audit messages is disabled. Sep 12 23:55:15.260156 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 23:55:15.260168 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:55:15.260179 systemd-journald[1114]: Journal started Sep 12 23:55:15.260203 systemd-journald[1114]: Runtime Journal (/run/log/journal/4baf5d8a182a4e109382ae75deff1a48) is 5.9M, max 47.3M, 41.4M free. 
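
The 47.3M cap reported for the runtime journal is journald's default sizing against /run; a drop-in can pin both the runtime and persistent limits explicitly. The values below are illustrative, not read from this machine:

    # /etc/systemd/journald.conf.d/size.conf (hypothetical drop-in)
    [Journal]
    RuntimeMaxUse=48M    # cap for /run/log/journal before the flush
    SystemMaxUse=196M    # cap for /var/log/journal after the flush
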
Sep 12 23:55:15.085451 systemd[1]: Queued start job for default target multi-user.target. Sep 12 23:55:15.101695 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 23:55:15.102051 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 23:55:15.263233 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 23:55:15.263263 systemd[1]: Stopped verity-setup.service. Sep 12 23:55:15.266081 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 23:55:15.266678 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 23:55:15.267728 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 23:55:15.268782 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 23:55:15.269584 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 23:55:15.270585 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 23:55:15.271581 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 23:55:15.272581 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 23:55:15.273816 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:55:15.275006 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 23:55:15.275150 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 23:55:15.276334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:55:15.276478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:55:15.277730 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:55:15.277871 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:55:15.278890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:55:15.279016 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:55:15.281173 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 23:55:15.281329 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 23:55:15.282724 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:55:15.282918 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:55:15.285840 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:55:15.287273 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 23:55:15.288872 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 23:55:15.301592 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 23:55:15.308880 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 23:55:15.311112 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 23:55:15.312273 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 23:55:15.312314 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:55:15.314397 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 23:55:15.316993 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Sep 12 23:55:15.319384 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 23:55:15.320600 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:55:15.322096 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 23:55:15.324086 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 23:55:15.325100 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:55:15.328956 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 23:55:15.330276 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:55:15.332560 systemd-journald[1114]: Time spent on flushing to /var/log/journal/4baf5d8a182a4e109382ae75deff1a48 is 24.203ms for 853 entries. Sep 12 23:55:15.332560 systemd-journald[1114]: System Journal (/var/log/journal/4baf5d8a182a4e109382ae75deff1a48) is 8.0M, max 195.6M, 187.6M free. Sep 12 23:55:15.364704 systemd-journald[1114]: Received client request to flush runtime journal. Sep 12 23:55:15.333948 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:55:15.337019 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 23:55:15.341945 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 23:55:15.344583 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:55:15.346032 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 23:55:15.347182 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 23:55:15.349813 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 23:55:15.351085 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 23:55:15.354664 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 23:55:15.357960 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 23:55:15.362922 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 23:55:15.364238 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:55:15.368131 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 23:55:15.375847 kernel: loop0: detected capacity change from 0 to 114432 Sep 12 23:55:15.383790 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 23:55:15.384175 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 23:55:15.388940 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Sep 12 23:55:15.389088 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Sep 12 23:55:15.390429 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 23:55:15.391070 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 23:55:15.395154 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
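
Two of the one-shot jobs in this stretch have direct command-line equivalents, which is useful when reproducing the sequence by hand; both are standard systemd tools:

    journalctl --flush                  # what systemd-journal-flush.service triggers: move the /run journal to /var
    systemd-machine-id-setup --commit   # what systemd-machine-id-commit.service runs: persist the transient ID
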
Sep 12 23:55:15.403923 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 23:55:15.410792 kernel: loop1: detected capacity change from 0 to 207008 Sep 12 23:55:15.425989 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 23:55:15.432933 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:55:15.439829 kernel: loop2: detected capacity change from 0 to 114328 Sep 12 23:55:15.445370 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Sep 12 23:55:15.445387 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Sep 12 23:55:15.449329 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:55:15.480798 kernel: loop3: detected capacity change from 0 to 114432 Sep 12 23:55:15.486784 kernel: loop4: detected capacity change from 0 to 207008 Sep 12 23:55:15.492785 kernel: loop5: detected capacity change from 0 to 114328 Sep 12 23:55:15.496926 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 23:55:15.497325 (sd-merge)[1187]: Merged extensions into '/usr'. Sep 12 23:55:15.500791 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 23:55:15.500810 systemd[1]: Reloading... Sep 12 23:55:15.556820 zram_generator::config[1209]: No configuration found. Sep 12 23:55:15.598554 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 23:55:15.659163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:55:15.694644 systemd[1]: Reloading finished in 193 ms. Sep 12 23:55:15.723237 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 23:55:15.724605 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 23:55:15.740006 systemd[1]: Starting ensure-sysext.service... Sep 12 23:55:15.741735 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:55:15.747694 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)... Sep 12 23:55:15.747704 systemd[1]: Reloading... Sep 12 23:55:15.760321 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 23:55:15.760953 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 23:55:15.761705 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 23:55:15.762023 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Sep 12 23:55:15.762161 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Sep 12 23:55:15.772919 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:55:15.773025 systemd-tmpfiles[1250]: Skipping /boot Sep 12 23:55:15.780255 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:55:15.780820 systemd-tmpfiles[1250]: Skipping /boot Sep 12 23:55:15.797788 zram_generator::config[1276]: No configuration found. 
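
The (sd-merge) lines above are systemd-sysext overlaying the three extension images, including the kubernetes.raw written by Ignition earlier, onto /usr, followed by a daemon reload. The same machinery can be inspected and re-driven later with the standard subcommands:

    systemd-sysext status    # shows which hierarchies currently have extensions merged
    systemd-sysext refresh   # unmerge and re-merge after adding or removing a .raw image
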
Sep 12 23:55:15.885000 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:55:15.920590 systemd[1]: Reloading finished in 172 ms. Sep 12 23:55:15.934909 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 23:55:15.947188 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:55:15.956208 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:55:15.958895 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 23:55:15.961301 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 23:55:15.967078 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:55:15.973673 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:55:15.976831 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 23:55:15.980586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:55:15.983772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:55:15.989935 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:55:15.992816 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:55:15.993830 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:55:15.995807 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 23:55:15.997231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:55:15.999132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:55:16.002463 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:55:16.002644 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:55:16.004440 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Sep 12 23:55:16.007568 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 23:55:16.010786 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 23:55:16.012364 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:55:16.012497 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:55:16.017183 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 23:55:16.020209 augenrules[1342]: No rules Sep 12 23:55:16.021479 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:55:16.023298 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:55:16.027172 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:55:16.038069 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:55:16.041156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
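
The "Duplicate line" warnings from systemd-tmpfiles above are benign: two tmpfiles.d fragments declare the same path and the first one encountered wins. For reference, a tmpfiles.d entry has the shape below (an illustrative line, not quoted from provision.conf):

    # type  path   mode  user  group  age
    d       /root  0700  root  root   -
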
Sep 12 23:55:16.044408 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:55:16.046487 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:55:16.048680 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:55:16.051387 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 23:55:16.052708 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:55:16.053449 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 23:55:16.055439 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:55:16.056809 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:55:16.058184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:55:16.058484 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:55:16.061378 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:55:16.062510 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:55:16.079719 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:55:16.083122 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:55:16.088631 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:55:16.091129 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:55:16.096247 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:55:16.097369 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:55:16.097499 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:55:16.098362 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 23:55:16.101376 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:55:16.101504 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:55:16.103549 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:55:16.103678 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:55:16.104484 systemd-resolved[1316]: Positive Trust Anchors: Sep 12 23:55:16.105364 systemd-resolved[1316]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:55:16.105475 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:55:16.107310 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:55:16.107453 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:55:16.112279 systemd-resolved[1316]: Defaulting to hostname 'linux'. Sep 12 23:55:16.113998 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 23:55:16.114379 systemd[1]: Finished ensure-sysext.service. Sep 12 23:55:16.117207 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:55:16.122797 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:55:16.122946 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:55:16.130957 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:55:16.132107 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:55:16.132211 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:55:16.134643 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 23:55:16.136242 systemd-networkd[1374]: lo: Link UP Sep 12 23:55:16.136249 systemd-networkd[1374]: lo: Gained carrier Sep 12 23:55:16.137359 systemd-networkd[1374]: Enumeration completed Sep 12 23:55:16.137471 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:55:16.138013 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:55:16.138016 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:55:16.138593 systemd[1]: Reached target network.target - Network. Sep 12 23:55:16.138614 systemd-networkd[1374]: eth0: Link UP Sep 12 23:55:16.138618 systemd-networkd[1374]: eth0: Gained carrier Sep 12 23:55:16.138630 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:55:16.144829 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 23:55:16.151849 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1364) Sep 12 23:55:16.160832 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 23:55:16.167600 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:55:16.173857 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
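
eth0 was matched by Flatcar's catch-all zz-default.network, which networkd flags as "potentially unpredictable" because it matches on an interface name pattern rather than a stable hardware property. A minimal sketch of such a catch-all (the shipped file likely contains more than this):

    # /usr/lib/systemd/network/zz-default.network (sketch, not the verbatim file)
    [Match]
    Name=*

    [Network]
    DHCP=yes
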
Sep 12 23:55:16.181480 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 23:55:16.192049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:55:16.192204 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 23:55:16.192298 systemd-timesyncd[1394]: Initial clock synchronization to Fri 2025-09-12 23:55:16.058921 UTC. Sep 12 23:55:16.196124 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 23:55:16.198057 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 23:55:16.204026 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 23:55:16.212174 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 23:55:16.225001 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 23:55:16.234431 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:55:16.234736 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 23:55:16.267328 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 23:55:16.268631 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:55:16.269593 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:55:16.270524 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 23:55:16.271499 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 23:55:16.272642 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 23:55:16.273608 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 23:55:16.274610 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 23:55:16.275578 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 23:55:16.275612 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:55:16.276310 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:55:16.277952 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 23:55:16.280069 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 23:55:16.289693 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 23:55:16.291779 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 23:55:16.293112 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 23:55:16.294045 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:55:16.294829 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:55:16.295550 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:55:16.295581 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:55:16.296496 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 23:55:16.298319 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
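
systemd-timesyncd synchronized against 10.0.0.1, which is also the DHCP server in this log, so the time server was plausibly handed out with the lease rather than configured statically. The static equivalent would be a timesyncd drop-in with illustrative values:

    # /etc/systemd/timesyncd.conf.d/ntp.conf (hypothetical)
    [Time]
    NTP=10.0.0.1
    FallbackNTP=pool.ntp.org
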
Sep 12 23:55:16.299588 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 23:55:16.301556 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 23:55:16.304977 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 23:55:16.306922 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 23:55:16.307586 jq[1420]: false Sep 12 23:55:16.310005 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 23:55:16.314948 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 23:55:16.318333 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 23:55:16.319637 dbus-daemon[1419]: [system] SELinux support is enabled Sep 12 23:55:16.321596 extend-filesystems[1421]: Found loop3 Sep 12 23:55:16.321596 extend-filesystems[1421]: Found loop4 Sep 12 23:55:16.321596 extend-filesystems[1421]: Found loop5 Sep 12 23:55:16.321596 extend-filesystems[1421]: Found vda Sep 12 23:55:16.321596 extend-filesystems[1421]: Found vda1 Sep 12 23:55:16.321596 extend-filesystems[1421]: Found vda2 Sep 12 23:55:16.321596 extend-filesystems[1421]: Found vda3 Sep 12 23:55:16.321596 extend-filesystems[1421]: Found usr Sep 12 23:55:16.321596 extend-filesystems[1421]: Found vda4 Sep 12 23:55:16.321596 extend-filesystems[1421]: Found vda6 Sep 12 23:55:16.321596 extend-filesystems[1421]: Found vda7 Sep 12 23:55:16.321596 extend-filesystems[1421]: Found vda9 Sep 12 23:55:16.321596 extend-filesystems[1421]: Checking size of /dev/vda9 Sep 12 23:55:16.321717 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 23:55:16.325457 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 23:55:16.330151 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 23:55:16.331254 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 23:55:16.334964 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 23:55:16.338253 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 23:55:16.340099 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 23:55:16.344270 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 23:55:16.345363 extend-filesystems[1421]: Resized partition /dev/vda9 Sep 12 23:55:16.351205 extend-filesystems[1442]: resize2fs 1.47.1 (20-May-2024) Sep 12 23:55:16.350369 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 23:55:16.351787 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 23:55:16.352050 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 23:55:16.352198 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 23:55:16.354784 jq[1439]: true Sep 12 23:55:16.356776 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1352) Sep 12 23:55:16.356837 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 23:55:16.357061 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
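
extend-filesystems is performing an online grow of the mounted root ext4 filesystem, from 553472 to 1864699 4k blocks per the kernel line above; the completion messages follow below. The manual equivalent, hedged in that growpart comes from cloud-utils and may not be present on every image:

    growpart /dev/vda 9   # assumption: grows partition 9 to fill the disk, if cloud-utils is installed
    resize2fs /dev/vda9   # online-resizes the mounted ext4 filesystem to fill the partition, as logged
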
Sep 12 23:55:16.357219 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 23:55:16.373423 (ntainerd)[1447]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 23:55:16.376704 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 23:55:16.376735 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 23:55:16.380471 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 23:55:16.380503 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 23:55:16.380820 jq[1446]: true Sep 12 23:55:16.383211 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 23:55:16.399190 update_engine[1436]: I20250912 23:55:16.388821 1436 main.cc:92] Flatcar Update Engine starting Sep 12 23:55:16.399190 update_engine[1436]: I20250912 23:55:16.390820 1436 update_check_scheduler.cc:74] Next update check in 4m20s Sep 12 23:55:16.399470 tar[1445]: linux-arm64/LICENSE Sep 12 23:55:16.391089 systemd[1]: Started update-engine.service - Update Engine. Sep 12 23:55:16.399706 extend-filesystems[1442]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 23:55:16.399706 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 23:55:16.399706 extend-filesystems[1442]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 23:55:16.393604 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 23:55:16.410210 extend-filesystems[1421]: Resized filesystem in /dev/vda9 Sep 12 23:55:16.410919 tar[1445]: linux-arm64/helm Sep 12 23:55:16.401502 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 23:55:16.401716 systemd-logind[1433]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 23:55:16.401767 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 23:55:16.402071 systemd-logind[1433]: New seat seat0. Sep 12 23:55:16.403993 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 23:55:16.436639 bash[1475]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:55:16.439852 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 23:55:16.441473 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 23:55:16.452940 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:55:16.525301 containerd[1447]: time="2025-09-12T23:55:16.525194200Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 23:55:16.550427 containerd[1447]: time="2025-09-12T23:55:16.550176360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:16.551923 containerd[1447]: time="2025-09-12T23:55:16.551689480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:55:16.551923 containerd[1447]: time="2025-09-12T23:55:16.551721480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 23:55:16.551923 containerd[1447]: time="2025-09-12T23:55:16.551738120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 23:55:16.552055 containerd[1447]: time="2025-09-12T23:55:16.552036400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 23:55:16.552110 containerd[1447]: time="2025-09-12T23:55:16.552097360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:16.552258 containerd[1447]: time="2025-09-12T23:55:16.552215760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:55:16.552313 containerd[1447]: time="2025-09-12T23:55:16.552300400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:16.552556 containerd[1447]: time="2025-09-12T23:55:16.552533600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:55:16.552622 containerd[1447]: time="2025-09-12T23:55:16.552608040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:16.553053 containerd[1447]: time="2025-09-12T23:55:16.552673840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:55:16.553053 containerd[1447]: time="2025-09-12T23:55:16.552690320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:16.553053 containerd[1447]: time="2025-09-12T23:55:16.552799680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:16.553053 containerd[1447]: time="2025-09-12T23:55:16.553016840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:16.553279 containerd[1447]: time="2025-09-12T23:55:16.553258600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:55:16.553332 containerd[1447]: time="2025-09-12T23:55:16.553320040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 23:55:16.553465 containerd[1447]: time="2025-09-12T23:55:16.553437080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 12 23:55:16.553600 containerd[1447]: time="2025-09-12T23:55:16.553581240Z" level=info msg="metadata content store policy set" policy=shared Sep 12 23:55:16.556805 containerd[1447]: time="2025-09-12T23:55:16.556782120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 23:55:16.556932 containerd[1447]: time="2025-09-12T23:55:16.556915760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 23:55:16.557017 containerd[1447]: time="2025-09-12T23:55:16.556977760Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 23:55:16.557078 containerd[1447]: time="2025-09-12T23:55:16.557065760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 23:55:16.557131 containerd[1447]: time="2025-09-12T23:55:16.557119320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 23:55:16.557316 containerd[1447]: time="2025-09-12T23:55:16.557286760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 23:55:16.557635 containerd[1447]: time="2025-09-12T23:55:16.557613400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.557830280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.557855560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.557868920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.557884640Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.557897840Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.557910680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.557924360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.557942000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.557954040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.557966240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.557979320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.557999240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.558012560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558425 containerd[1447]: time="2025-09-12T23:55:16.558026280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558039360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558054960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558067880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558079360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558097640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558111000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558124640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558135760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558146760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558158800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558174760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558196760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558208560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.558735 containerd[1447]: time="2025-09-12T23:55:16.558219280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 23:55:16.559456 containerd[1447]: time="2025-09-12T23:55:16.559432200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 23:55:16.559716 containerd[1447]: time="2025-09-12T23:55:16.559696680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 23:55:16.559800 containerd[1447]: time="2025-09-12T23:55:16.559786240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 23:55:16.559855 containerd[1447]: time="2025-09-12T23:55:16.559841960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 23:55:16.559902 containerd[1447]: time="2025-09-12T23:55:16.559889920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.559954 containerd[1447]: time="2025-09-12T23:55:16.559941440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 23:55:16.560003 containerd[1447]: time="2025-09-12T23:55:16.559991400Z" level=info msg="NRI interface is disabled by configuration." Sep 12 23:55:16.560070 containerd[1447]: time="2025-09-12T23:55:16.560057560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 23:55:16.560503 containerd[1447]: time="2025-09-12T23:55:16.560430720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 23:55:16.561033 containerd[1447]: time="2025-09-12T23:55:16.560661400Z" level=info msg="Connect containerd service" Sep 12 23:55:16.561033 containerd[1447]: time="2025-09-12T23:55:16.560702680Z" level=info msg="using legacy CRI server" Sep 12 23:55:16.561033 containerd[1447]: time="2025-09-12T23:55:16.560710240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:55:16.561033 containerd[1447]: time="2025-09-12T23:55:16.560836560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 23:55:16.563042 containerd[1447]: time="2025-09-12T23:55:16.563009120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:55:16.563294 containerd[1447]: time="2025-09-12T23:55:16.563256680Z" level=info msg="Start subscribing containerd event" Sep 12 23:55:16.563362 containerd[1447]: time="2025-09-12T23:55:16.563349920Z" level=info msg="Start recovering state" Sep 12 23:55:16.563499 containerd[1447]: time="2025-09-12T23:55:16.563454640Z" level=info msg="Start event monitor" Sep 12 23:55:16.563615 containerd[1447]: time="2025-09-12T23:55:16.563602240Z" level=info msg="Start snapshots syncer" Sep 12 23:55:16.563678 containerd[1447]: time="2025-09-12T23:55:16.563666880Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:55:16.563889 containerd[1447]: time="2025-09-12T23:55:16.563874280Z" level=info msg="Start streaming server" Sep 12 23:55:16.564031 containerd[1447]: time="2025-09-12T23:55:16.563619520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:55:16.564150 containerd[1447]: time="2025-09-12T23:55:16.564130880Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 23:55:16.564282 containerd[1447]: time="2025-09-12T23:55:16.564255360Z" level=info msg="containerd successfully booted in 0.041333s" Sep 12 23:55:16.564332 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 23:55:16.758986 tar[1445]: linux-arm64/README.md Sep 12 23:55:16.780471 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 23:55:17.185883 systemd-networkd[1374]: eth0: Gained IPv6LL Sep 12 23:55:17.188805 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 23:55:17.190602 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 23:55:17.203991 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 23:55:17.206708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:17.208897 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 23:55:17.228430 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 23:55:17.229799 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 23:55:17.231418 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 23:55:17.234114 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
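The "failed to load cni during init" error above is expected on a first boot: per the CRI config dump, containerd looks for at most one network config (NetworkPluginMaxConfNum:1) under /etc/cni/net.d, and nothing has written one yet. A cluster network add-on normally drops a conflist there later; a minimal illustrative file (the name, bridge, and subnet below are placeholders, not taken from this host) would look like:

    {
      "cniVersion": "1.0.0",
      "name": "examplenet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/24",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        }
      ]
    }

Saved as, say, /etc/cni/net.d/10-examplenet.conflist this would satisfy the loader; on this machine the real config is expected to arrive from the cluster's network plugin.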
Sep 12 23:55:17.404533 sshd_keygen[1441]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:55:17.423216 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:55:17.439008 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 23:55:17.443961 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:55:17.444165 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:55:17.453878 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:55:17.460611 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:55:17.463914 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 23:55:17.466140 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 23:55:17.467491 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 23:55:17.751381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:17.752622 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 23:55:17.754569 systemd[1]: Startup finished in 563ms (kernel) + 5.065s (initrd) + 3.035s (userspace) = 8.663s. Sep 12 23:55:17.756141 (kubelet)[1531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:55:18.103204 kubelet[1531]: E0912 23:55:18.103140 1531 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:55:18.105587 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:55:18.105725 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:55:22.293362 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 23:55:22.294957 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:43726.service - OpenSSH per-connection server daemon (10.0.0.1:43726). Sep 12 23:55:22.348958 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 43726 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 12 23:55:22.350651 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:22.358206 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:55:22.368078 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 23:55:22.369934 systemd-logind[1433]: New session 1 of user core. Sep 12 23:55:22.378465 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 23:55:22.388160 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 23:55:22.391216 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 23:55:22.478824 systemd[1548]: Queued start job for default target default.target. Sep 12 23:55:22.490775 systemd[1548]: Created slice app.slice - User Application Slice. Sep 12 23:55:22.490802 systemd[1548]: Reached target paths.target - Paths. Sep 12 23:55:22.490814 systemd[1548]: Reached target timers.target - Timers. Sep 12 23:55:22.492140 systemd[1548]: Starting dbus.socket - D-Bus User Message Bus Socket... 
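The kubelet exit above (status=1, /var/lib/kubelet/config.yaml missing) is likewise expected at this stage: on a kubeadm-provisioned node that file is written by kubeadm init/join, which has not run yet. A minimal sketch of such a file, assuming kubeadm conventions and reusing values that appear later in this log (systemd cgroup driver, /etc/kubernetes/manifests static pod path, containerd socket):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # matches CgroupDriver "systemd" in the container manager dump below
    cgroupDriver: systemd
    # matches "Adding static pod path" logged by the kubelet below
    staticPodPath: /etc/kubernetes/manifests
    # replaces the deprecated --container-runtime-endpoint flag
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

Hand-writing this file is not required here; the repeated kubelet failures simply persist until provisioning creates it.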
Sep 12 23:55:22.503026 systemd[1548]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 23:55:22.503143 systemd[1548]: Reached target sockets.target - Sockets. Sep 12 23:55:22.503161 systemd[1548]: Reached target basic.target - Basic System. Sep 12 23:55:22.503198 systemd[1548]: Reached target default.target - Main User Target. Sep 12 23:55:22.503224 systemd[1548]: Startup finished in 101ms. Sep 12 23:55:22.503410 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 23:55:22.504816 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 23:55:22.569422 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:43740.service - OpenSSH per-connection server daemon (10.0.0.1:43740). Sep 12 23:55:22.607733 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 43740 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 12 23:55:22.609105 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:22.613482 systemd-logind[1433]: New session 2 of user core. Sep 12 23:55:22.622961 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 23:55:22.674752 sshd[1559]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:22.694250 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:43740.service: Deactivated successfully. Sep 12 23:55:22.695671 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 23:55:22.698896 systemd-logind[1433]: Session 2 logged out. Waiting for processes to exit. Sep 12 23:55:22.699985 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:43746.service - OpenSSH per-connection server daemon (10.0.0.1:43746). Sep 12 23:55:22.700701 systemd-logind[1433]: Removed session 2. Sep 12 23:55:22.738716 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 43746 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 12 23:55:22.740078 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:22.744743 systemd-logind[1433]: New session 3 of user core. Sep 12 23:55:22.757973 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 23:55:22.805707 sshd[1566]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:22.824255 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:43746.service: Deactivated successfully. Sep 12 23:55:22.825858 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 23:55:22.827111 systemd-logind[1433]: Session 3 logged out. Waiting for processes to exit. Sep 12 23:55:22.828135 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:43752.service - OpenSSH per-connection server daemon (10.0.0.1:43752). Sep 12 23:55:22.829140 systemd-logind[1433]: Removed session 3. Sep 12 23:55:22.867005 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 43752 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 12 23:55:22.868386 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:22.872324 systemd-logind[1433]: New session 4 of user core. Sep 12 23:55:22.883944 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 23:55:22.936332 sshd[1573]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:22.946079 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:43752.service: Deactivated successfully. Sep 12 23:55:22.947853 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 23:55:22.949091 systemd-logind[1433]: Session 4 logged out. Waiting for processes to exit. 
Sep 12 23:55:22.950658 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:43760.service - OpenSSH per-connection server daemon (10.0.0.1:43760). Sep 12 23:55:22.951534 systemd-logind[1433]: Removed session 4. Sep 12 23:55:22.989745 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 43760 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 12 23:55:22.991206 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:22.995456 systemd-logind[1433]: New session 5 of user core. Sep 12 23:55:23.010957 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 23:55:23.067741 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 23:55:23.068062 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:55:23.084606 sudo[1583]: pam_unix(sudo:session): session closed for user root Sep 12 23:55:23.086246 sshd[1580]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:23.099178 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:43760.service: Deactivated successfully. Sep 12 23:55:23.100833 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 23:55:23.102052 systemd-logind[1433]: Session 5 logged out. Waiting for processes to exit. Sep 12 23:55:23.103280 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:43774.service - OpenSSH per-connection server daemon (10.0.0.1:43774). Sep 12 23:55:23.103963 systemd-logind[1433]: Removed session 5. Sep 12 23:55:23.142834 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 43774 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 12 23:55:23.144199 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:23.148217 systemd-logind[1433]: New session 6 of user core. Sep 12 23:55:23.159952 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 23:55:23.212426 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 23:55:23.212717 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:55:23.215877 sudo[1592]: pam_unix(sudo:session): session closed for user root Sep 12 23:55:23.220849 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 23:55:23.221444 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:55:23.240050 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 23:55:23.241375 auditctl[1595]: No rules Sep 12 23:55:23.242262 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:55:23.243830 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 23:55:23.245612 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:55:23.271276 augenrules[1613]: No rules Sep 12 23:55:23.272577 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:55:23.274044 sudo[1591]: pam_unix(sudo:session): session closed for user root Sep 12 23:55:23.275612 sshd[1588]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:23.285188 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:43774.service: Deactivated successfully. Sep 12 23:55:23.286687 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 23:55:23.288997 systemd-logind[1433]: Session 6 logged out. Waiting for processes to exit. 
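The sudo sequence above removes the shipped rule files from /etc/audit/rules.d/ and restarts audit-rules, so both auditctl and augenrules correctly report "No rules": augenrules assembles the active ruleset by concatenating /etc/audit/rules.d/*.rules. For illustration only (this host intentionally ends up with an empty set), a watch rule and a reload would look like:

    # /etc/audit/rules.d/10-identity.rules (example file, not present here)
    -w /etc/passwd -p wa -k identity

    augenrules --load   # merge rules.d/*.rules and load into the kernel
    auditctl -l         # list the rules now active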
Sep 12 23:55:23.300152 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:43788.service - OpenSSH per-connection server daemon (10.0.0.1:43788). Sep 12 23:55:23.301085 systemd-logind[1433]: Removed session 6. Sep 12 23:55:23.335253 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 43788 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 12 23:55:23.336733 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:23.340804 systemd-logind[1433]: New session 7 of user core. Sep 12 23:55:23.354961 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 23:55:23.408085 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 23:55:23.408367 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:55:23.673022 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 23:55:23.673159 (dockerd)[1643]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 23:55:23.888435 dockerd[1643]: time="2025-09-12T23:55:23.888365281Z" level=info msg="Starting up" Sep 12 23:55:24.011512 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport567021003-merged.mount: Deactivated successfully. Sep 12 23:55:24.027642 dockerd[1643]: time="2025-09-12T23:55:24.027392985Z" level=info msg="Loading containers: start." Sep 12 23:55:24.113792 kernel: Initializing XFRM netlink socket Sep 12 23:55:24.188677 systemd-networkd[1374]: docker0: Link UP Sep 12 23:55:24.208338 dockerd[1643]: time="2025-09-12T23:55:24.208284126Z" level=info msg="Loading containers: done." Sep 12 23:55:24.221863 dockerd[1643]: time="2025-09-12T23:55:24.221808191Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 23:55:24.222060 dockerd[1643]: time="2025-09-12T23:55:24.221919261Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 23:55:24.222089 dockerd[1643]: time="2025-09-12T23:55:24.222063950Z" level=info msg="Daemon has completed initialization" Sep 12 23:55:24.252825 dockerd[1643]: time="2025-09-12T23:55:24.252624901Z" level=info msg="API listen on /run/docker.sock" Sep 12 23:55:24.253080 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 23:55:24.861196 containerd[1447]: time="2025-09-12T23:55:24.861152347Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 23:55:25.009532 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1318750590-merged.mount: Deactivated successfully. Sep 12 23:55:25.390677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3918278507.mount: Deactivated successfully. 
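dockerd's overlay2 warning above concerns image-build performance only, per its own message, and follows from the kernel enabling CONFIG_OVERLAY_FS_REDIRECT_DIR. When the overlay module is loaded, its view of that option can be read from sysfs (the output shown is illustrative, not captured from this host):

    $ cat /sys/module/overlay/parameters/redirect_dir
    Y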
Sep 12 23:55:26.548781 containerd[1447]: time="2025-09-12T23:55:26.548711185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:26.549290 containerd[1447]: time="2025-09-12T23:55:26.549263648Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Sep 12 23:55:26.550934 containerd[1447]: time="2025-09-12T23:55:26.550296376Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:26.553354 containerd[1447]: time="2025-09-12T23:55:26.553320693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:26.555335 containerd[1447]: time="2025-09-12T23:55:26.555292381Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.694098575s" Sep 12 23:55:26.555335 containerd[1447]: time="2025-09-12T23:55:26.555332221Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 12 23:55:26.556091 containerd[1447]: time="2025-09-12T23:55:26.556052520Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 23:55:27.634571 containerd[1447]: time="2025-09-12T23:55:27.634514583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:27.635654 containerd[1447]: time="2025-09-12T23:55:27.635625261Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Sep 12 23:55:27.636602 containerd[1447]: time="2025-09-12T23:55:27.636177633Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:27.640359 containerd[1447]: time="2025-09-12T23:55:27.640319466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:27.641816 containerd[1447]: time="2025-09-12T23:55:27.641782475Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.085673989s" Sep 12 23:55:27.641921 containerd[1447]: time="2025-09-12T23:55:27.641904299Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 12 23:55:27.642459 
containerd[1447]: time="2025-09-12T23:55:27.642434489Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 23:55:28.282558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 23:55:28.291981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:28.388735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:28.395586 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:55:28.459399 kubelet[1862]: E0912 23:55:28.459344 1862 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:55:28.462503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:55:28.462648 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:55:28.926359 containerd[1447]: time="2025-09-12T23:55:28.926309157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:28.927372 containerd[1447]: time="2025-09-12T23:55:28.927345493Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Sep 12 23:55:28.927857 containerd[1447]: time="2025-09-12T23:55:28.927832700Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:28.931141 containerd[1447]: time="2025-09-12T23:55:28.931110220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:28.932317 containerd[1447]: time="2025-09-12T23:55:28.932258087Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.289789105s" Sep 12 23:55:28.932317 containerd[1447]: time="2025-09-12T23:55:28.932291160Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 12 23:55:28.933357 containerd[1447]: time="2025-09-12T23:55:28.932714413Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 23:55:29.967389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1721327073.mount: Deactivated successfully. 
Sep 12 23:55:30.185049 containerd[1447]: time="2025-09-12T23:55:30.184998758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:30.185943 containerd[1447]: time="2025-09-12T23:55:30.185907326Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Sep 12 23:55:30.186801 containerd[1447]: time="2025-09-12T23:55:30.186561921Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:30.190453 containerd[1447]: time="2025-09-12T23:55:30.188675586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:30.190453 containerd[1447]: time="2025-09-12T23:55:30.190116070Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.257364229s" Sep 12 23:55:30.190453 containerd[1447]: time="2025-09-12T23:55:30.190145782Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 12 23:55:30.191013 containerd[1447]: time="2025-09-12T23:55:30.190991256Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 23:55:30.865470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount117527974.mount: Deactivated successfully. 
Sep 12 23:55:31.698068 containerd[1447]: time="2025-09-12T23:55:31.698003660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:31.698937 containerd[1447]: time="2025-09-12T23:55:31.698896364Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 12 23:55:31.701055 containerd[1447]: time="2025-09-12T23:55:31.701020900Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:31.704059 containerd[1447]: time="2025-09-12T23:55:31.704010809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:31.706046 containerd[1447]: time="2025-09-12T23:55:31.705279506Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.514249279s" Sep 12 23:55:31.706046 containerd[1447]: time="2025-09-12T23:55:31.705318246Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 23:55:31.706297 containerd[1447]: time="2025-09-12T23:55:31.706275664Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 23:55:32.160636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1908307537.mount: Deactivated successfully. 
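One mismatch worth noting: the pulls here fetch pause:3.10, while containerd's CRI config (dumped above) still pins SandboxImage to registry.k8s.io/pause:3.8, which is why 3.8 is pulled again when the first sandboxes start near the end of this log. On the containerd side that setting lives in config.toml, roughly as follows (fragment, illustrative):

    # /etc/containerd/config.toml (fragment)
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"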
Sep 12 23:55:32.168377 containerd[1447]: time="2025-09-12T23:55:32.168317203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:32.169792 containerd[1447]: time="2025-09-12T23:55:32.169731541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 12 23:55:32.171261 containerd[1447]: time="2025-09-12T23:55:32.171206942Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:32.175236 containerd[1447]: time="2025-09-12T23:55:32.175178606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:32.176066 containerd[1447]: time="2025-09-12T23:55:32.175994889Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 469.636316ms" Sep 12 23:55:32.176066 containerd[1447]: time="2025-09-12T23:55:32.176031966Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 23:55:32.178068 containerd[1447]: time="2025-09-12T23:55:32.176851083Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 23:55:32.689901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount948472783.mount: Deactivated successfully. Sep 12 23:55:34.200808 containerd[1447]: time="2025-09-12T23:55:34.199340761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:34.200808 containerd[1447]: time="2025-09-12T23:55:34.199865578Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Sep 12 23:55:34.201169 containerd[1447]: time="2025-09-12T23:55:34.200964526Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:34.204414 containerd[1447]: time="2025-09-12T23:55:34.204377570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:34.205863 containerd[1447]: time="2025-09-12T23:55:34.205827993Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.028947012s" Sep 12 23:55:34.205955 containerd[1447]: time="2025-09-12T23:55:34.205939361Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 12 23:55:38.532487 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
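With the etcd pull above, the full kubeadm image set for v1.32.9 (apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd) is now cached locally. Assuming kubeadm and crictl are installed, which this log implies but never shows directly, the same set can be pre-pulled and inspected with:

    kubeadm config images pull --kubernetes-version v1.32.9
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images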
Sep 12 23:55:38.543186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:38.694478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:38.698260 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:55:38.734123 kubelet[2024]: E0912 23:55:38.734057 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:55:38.738170 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:55:38.738650 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:55:39.822531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:39.836308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:39.860780 systemd[1]: Reloading requested from client PID 2039 ('systemctl') (unit session-7.scope)... Sep 12 23:55:39.860796 systemd[1]: Reloading... Sep 12 23:55:39.931934 zram_generator::config[2081]: No configuration found. Sep 12 23:55:40.224724 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:55:40.279174 systemd[1]: Reloading finished in 418 ms. Sep 12 23:55:40.324680 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 23:55:40.324811 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 23:55:40.325151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:40.327142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:40.427808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:40.432044 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:55:40.466644 kubelet[2124]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:55:40.466644 kubelet[2124]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 23:55:40.466644 kubelet[2124]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 23:55:40.467030 kubelet[2124]: I0912 23:55:40.466686 2124 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:55:41.161264 kubelet[2124]: I0912 23:55:41.160913 2124 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 23:55:41.161264 kubelet[2124]: I0912 23:55:41.160954 2124 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:55:41.161264 kubelet[2124]: I0912 23:55:41.161239 2124 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 23:55:41.183309 kubelet[2124]: E0912 23:55:41.183268 2124 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:55:41.184935 kubelet[2124]: I0912 23:55:41.184905 2124 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:55:41.192738 kubelet[2124]: E0912 23:55:41.192698 2124 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:55:41.195792 kubelet[2124]: I0912 23:55:41.192887 2124 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:55:41.196397 kubelet[2124]: I0912 23:55:41.196374 2124 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:55:41.197125 kubelet[2124]: I0912 23:55:41.197070 2124 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:55:41.197330 kubelet[2124]: I0912 23:55:41.197116 2124 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:55:41.197421 kubelet[2124]: I0912 23:55:41.197402 2124 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:55:41.197450 kubelet[2124]: I0912 23:55:41.197441 2124 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 23:55:41.197700 kubelet[2124]: I0912 23:55:41.197670 2124 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:55:41.200325 kubelet[2124]: I0912 23:55:41.200295 2124 kubelet.go:446] "Attempting to sync node with API server" Sep 12 23:55:41.200362 kubelet[2124]: I0912 23:55:41.200330 2124 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:55:41.200389 kubelet[2124]: I0912 23:55:41.200370 2124 kubelet.go:352] "Adding apiserver pod source" Sep 12 23:55:41.200389 kubelet[2124]: I0912 23:55:41.200383 2124 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:55:41.203988 kubelet[2124]: I0912 23:55:41.203810 2124 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:55:41.203988 kubelet[2124]: W0912 23:55:41.203486 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Sep 12 23:55:41.203988 kubelet[2124]: E0912 23:55:41.203922 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:55:41.204590 kubelet[2124]: I0912 23:55:41.204551 2124 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:55:41.204694 kubelet[2124]: W0912 23:55:41.204674 2124 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 23:55:41.205460 kubelet[2124]: W0912 23:55:41.204705 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Sep 12 23:55:41.205460 kubelet[2124]: E0912 23:55:41.204772 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:55:41.205585 kubelet[2124]: I0912 23:55:41.205507 2124 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 23:55:41.205585 kubelet[2124]: I0912 23:55:41.205542 2124 server.go:1287] "Started kubelet" Sep 12 23:55:41.207973 kubelet[2124]: I0912 23:55:41.207909 2124 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:55:41.208290 kubelet[2124]: I0912 23:55:41.208258 2124 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:55:41.208369 kubelet[2124]: I0912 23:55:41.208344 2124 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:55:41.209077 kubelet[2124]: I0912 23:55:41.209048 2124 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:55:41.209268 kubelet[2124]: I0912 23:55:41.209188 2124 server.go:479] "Adding debug handlers to kubelet server" Sep 12 23:55:41.210213 kubelet[2124]: I0912 23:55:41.210185 2124 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:55:41.210422 kubelet[2124]: E0912 23:55:41.210091 2124 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864ae3f3cad9a64 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 23:55:41.205518948 +0000 UTC m=+0.770491285,LastTimestamp:2025-09-12 23:55:41.205518948 +0000 UTC m=+0.770491285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 23:55:41.211014 kubelet[2124]: E0912 23:55:41.210997 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:55:41.211157 kubelet[2124]: I0912 23:55:41.211145 2124 
volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 23:55:41.211376 kubelet[2124]: I0912 23:55:41.211353 2124 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 23:55:41.211453 kubelet[2124]: I0912 23:55:41.211414 2124 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:55:41.211835 kubelet[2124]: E0912 23:55:41.211809 2124 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:55:41.212011 kubelet[2124]: W0912 23:55:41.211967 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Sep 12 23:55:41.212117 kubelet[2124]: E0912 23:55:41.212078 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:55:41.212265 kubelet[2124]: I0912 23:55:41.212034 2124 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:55:41.212364 kubelet[2124]: I0912 23:55:41.212348 2124 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:55:41.212679 kubelet[2124]: E0912 23:55:41.212587 2124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms" Sep 12 23:55:41.213345 kubelet[2124]: I0912 23:55:41.213299 2124 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:55:41.225574 kubelet[2124]: I0912 23:55:41.225546 2124 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 23:55:41.225574 kubelet[2124]: I0912 23:55:41.225566 2124 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 23:55:41.225688 kubelet[2124]: I0912 23:55:41.225584 2124 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:55:41.227630 kubelet[2124]: I0912 23:55:41.227585 2124 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:55:41.229184 kubelet[2124]: I0912 23:55:41.228865 2124 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 23:55:41.229184 kubelet[2124]: I0912 23:55:41.228897 2124 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 23:55:41.229184 kubelet[2124]: I0912 23:55:41.228918 2124 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
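At this point the kubelet itself is healthy and serving (0.0.0.0:10250 plus the podresources socket), and every refused dial to https://10.0.0.36:6443 simply reflects that the apiserver it is about to launch is not up yet. The two states can be told apart from the host, assuming the kubelet's default localhost healthz port 10248:

    curl -s http://127.0.0.1:10248/healthz; echo    # kubelet liveness, prints "ok"
    curl -sk https://10.0.0.36:6443/healthz; echo   # refused until kube-apiserver is up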
Sep 12 23:55:41.229184 kubelet[2124]: I0912 23:55:41.228926 2124 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 23:55:41.229184 kubelet[2124]: E0912 23:55:41.228972 2124 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:55:41.229604 kubelet[2124]: W0912 23:55:41.229553 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Sep 12 23:55:41.229657 kubelet[2124]: E0912 23:55:41.229611 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:55:41.308505 kubelet[2124]: I0912 23:55:41.308443 2124 policy_none.go:49] "None policy: Start" Sep 12 23:55:41.308505 kubelet[2124]: I0912 23:55:41.308475 2124 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 23:55:41.308505 kubelet[2124]: I0912 23:55:41.308489 2124 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:55:41.311244 kubelet[2124]: E0912 23:55:41.311192 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:55:41.314939 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 23:55:41.329108 kubelet[2124]: E0912 23:55:41.329051 2124 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 23:55:41.329572 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 23:55:41.332272 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 23:55:41.344846 kubelet[2124]: I0912 23:55:41.344625 2124 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:55:41.344946 kubelet[2124]: I0912 23:55:41.344870 2124 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:55:41.344946 kubelet[2124]: I0912 23:55:41.344882 2124 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:55:41.345179 kubelet[2124]: I0912 23:55:41.345153 2124 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:55:41.346192 kubelet[2124]: E0912 23:55:41.346168 2124 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 23:55:41.346284 kubelet[2124]: E0912 23:55:41.346232 2124 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 23:55:41.414224 kubelet[2124]: E0912 23:55:41.414097 2124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" Sep 12 23:55:41.447251 kubelet[2124]: I0912 23:55:41.447218 2124 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:55:41.447664 kubelet[2124]: E0912 23:55:41.447623 2124 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Sep 12 23:55:41.537707 systemd[1]: Created slice kubepods-burstable-pod5bbb35092dd229a62720579dec2ff373.slice - libcontainer container kubepods-burstable-pod5bbb35092dd229a62720579dec2ff373.slice. Sep 12 23:55:41.550000 kubelet[2124]: E0912 23:55:41.549783 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:55:41.552886 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 12 23:55:41.566322 kubelet[2124]: E0912 23:55:41.566277 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:55:41.568847 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. 
Sep 12 23:55:41.570518 kubelet[2124]: E0912 23:55:41.570483 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:55:41.613866 kubelet[2124]: I0912 23:55:41.613824 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:55:41.613866 kubelet[2124]: I0912 23:55:41.613866 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:55:41.614007 kubelet[2124]: I0912 23:55:41.613888 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbb35092dd229a62720579dec2ff373-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5bbb35092dd229a62720579dec2ff373\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:55:41.614007 kubelet[2124]: I0912 23:55:41.613912 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:55:41.614007 kubelet[2124]: I0912 23:55:41.613927 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:55:41.614007 kubelet[2124]: I0912 23:55:41.613945 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:55:41.614007 kubelet[2124]: I0912 23:55:41.613961 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 23:55:41.614161 kubelet[2124]: I0912 23:55:41.613975 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbb35092dd229a62720579dec2ff373-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5bbb35092dd229a62720579dec2ff373\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:55:41.614161 kubelet[2124]: I0912 23:55:41.613990 2124 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbb35092dd229a62720579dec2ff373-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5bbb35092dd229a62720579dec2ff373\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:55:41.648951 kubelet[2124]: I0912 23:55:41.648922 2124 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:55:41.649594 kubelet[2124]: E0912 23:55:41.649554 2124 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Sep 12 23:55:41.815126 kubelet[2124]: E0912 23:55:41.815070 2124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms" Sep 12 23:55:41.850501 kubelet[2124]: E0912 23:55:41.850447 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:41.851439 containerd[1447]: time="2025-09-12T23:55:41.851223296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5bbb35092dd229a62720579dec2ff373,Namespace:kube-system,Attempt:0,}" Sep 12 23:55:41.867555 kubelet[2124]: E0912 23:55:41.867459 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:41.867953 containerd[1447]: time="2025-09-12T23:55:41.867917061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 12 23:55:41.871473 kubelet[2124]: E0912 23:55:41.871185 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:41.871656 containerd[1447]: time="2025-09-12T23:55:41.871618161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 12 23:55:42.051303 kubelet[2124]: I0912 23:55:42.050998 2124 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:55:42.051630 kubelet[2124]: E0912 23:55:42.051604 2124 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Sep 12 23:55:42.053040 kubelet[2124]: W0912 23:55:42.052948 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Sep 12 23:55:42.053040 kubelet[2124]: E0912 23:55:42.053016 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Sep 12 
23:55:42.127445 kubelet[2124]: W0912 23:55:42.127300 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Sep 12 23:55:42.127445 kubelet[2124]: E0912 23:55:42.127345 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:55:42.353780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245795128.mount: Deactivated successfully. Sep 12 23:55:42.363813 containerd[1447]: time="2025-09-12T23:55:42.362948083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:55:42.364687 containerd[1447]: time="2025-09-12T23:55:42.364513278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:55:42.365649 containerd[1447]: time="2025-09-12T23:55:42.365289660Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:55:42.366140 containerd[1447]: time="2025-09-12T23:55:42.366099141Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:55:42.367087 containerd[1447]: time="2025-09-12T23:55:42.367054737Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 12 23:55:42.367645 containerd[1447]: time="2025-09-12T23:55:42.367345125Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:55:42.367909 containerd[1447]: time="2025-09-12T23:55:42.367810850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:55:42.370664 containerd[1447]: time="2025-09-12T23:55:42.370239215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:55:42.375538 containerd[1447]: time="2025-09-12T23:55:42.375493990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 503.801757ms" Sep 12 23:55:42.380208 containerd[1447]: time="2025-09-12T23:55:42.379844899Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 511.844853ms" Sep 12 23:55:42.380731 containerd[1447]: time="2025-09-12T23:55:42.380701912Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.400429ms" Sep 12 23:55:42.391461 kubelet[2124]: W0912 23:55:42.389894 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Sep 12 23:55:42.391461 kubelet[2124]: E0912 23:55:42.389996 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:55:42.474499 containerd[1447]: time="2025-09-12T23:55:42.474356612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:42.474499 containerd[1447]: time="2025-09-12T23:55:42.474435965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:42.474499 containerd[1447]: time="2025-09-12T23:55:42.474458751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:42.475195 containerd[1447]: time="2025-09-12T23:55:42.475028055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:42.475195 containerd[1447]: time="2025-09-12T23:55:42.475095815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:42.475906 containerd[1447]: time="2025-09-12T23:55:42.475429258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:42.475906 containerd[1447]: time="2025-09-12T23:55:42.475140748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:42.477574 containerd[1447]: time="2025-09-12T23:55:42.476901668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:42.477574 containerd[1447]: time="2025-09-12T23:55:42.476943243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:42.477574 containerd[1447]: time="2025-09-12T23:55:42.476954037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:42.477574 containerd[1447]: time="2025-09-12T23:55:42.477016640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:42.477574 containerd[1447]: time="2025-09-12T23:55:42.475832499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:42.496963 systemd[1]: Started cri-containerd-21525977da1231db0e22c28b04668d0f36a6a382c4f9d0b0c632d02466ceaa88.scope - libcontainer container 21525977da1231db0e22c28b04668d0f36a6a382c4f9d0b0c632d02466ceaa88. Sep 12 23:55:42.501961 systemd[1]: Started cri-containerd-4877dae7dda4c92b2a2ec66a5ca085339925107f4ad3671bb4750faaed96417d.scope - libcontainer container 4877dae7dda4c92b2a2ec66a5ca085339925107f4ad3671bb4750faaed96417d. Sep 12 23:55:42.504247 systemd[1]: Started cri-containerd-feed1da4e1b14a440ed0080a05a0dc66125cf1a6550a19302fc1e6b0150d3993.scope - libcontainer container feed1da4e1b14a440ed0080a05a0dc66125cf1a6550a19302fc1e6b0150d3993. Sep 12 23:55:42.539934 containerd[1447]: time="2025-09-12T23:55:42.539849392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5bbb35092dd229a62720579dec2ff373,Namespace:kube-system,Attempt:0,} returns sandbox id \"21525977da1231db0e22c28b04668d0f36a6a382c4f9d0b0c632d02466ceaa88\"" Sep 12 23:55:42.541612 kubelet[2124]: E0912 23:55:42.541300 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:42.545040 containerd[1447]: time="2025-09-12T23:55:42.544994152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4877dae7dda4c92b2a2ec66a5ca085339925107f4ad3671bb4750faaed96417d\"" Sep 12 23:55:42.545572 containerd[1447]: time="2025-09-12T23:55:42.545543267Z" level=info msg="CreateContainer within sandbox \"21525977da1231db0e22c28b04668d0f36a6a382c4f9d0b0c632d02466ceaa88\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 23:55:42.545633 kubelet[2124]: E0912 23:55:42.545591 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:42.546891 containerd[1447]: time="2025-09-12T23:55:42.546858530Z" level=info msg="CreateContainer within sandbox \"4877dae7dda4c92b2a2ec66a5ca085339925107f4ad3671bb4750faaed96417d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 23:55:42.551300 containerd[1447]: time="2025-09-12T23:55:42.551220912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"feed1da4e1b14a440ed0080a05a0dc66125cf1a6550a19302fc1e6b0150d3993\"" Sep 12 23:55:42.552095 kubelet[2124]: E0912 23:55:42.552067 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:42.554036 containerd[1447]: time="2025-09-12T23:55:42.553932550Z" level=info msg="CreateContainer within sandbox \"feed1da4e1b14a440ed0080a05a0dc66125cf1a6550a19302fc1e6b0150d3993\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 23:55:42.566300 containerd[1447]: time="2025-09-12T23:55:42.566143414Z" level=info msg="CreateContainer within sandbox 
\"4877dae7dda4c92b2a2ec66a5ca085339925107f4ad3671bb4750faaed96417d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c437f7e543995b12de0ac21bd69986d498e0cb581b1934e0de4266a3f4fd5d60\"" Sep 12 23:55:42.566969 containerd[1447]: time="2025-09-12T23:55:42.566932228Z" level=info msg="StartContainer for \"c437f7e543995b12de0ac21bd69986d498e0cb581b1934e0de4266a3f4fd5d60\"" Sep 12 23:55:42.567662 containerd[1447]: time="2025-09-12T23:55:42.567606870Z" level=info msg="CreateContainer within sandbox \"21525977da1231db0e22c28b04668d0f36a6a382c4f9d0b0c632d02466ceaa88\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b1da91cff3daf165443e3a162761ec49fb468ffd98f12187d89209a49ef20920\"" Sep 12 23:55:42.569136 containerd[1447]: time="2025-09-12T23:55:42.568004155Z" level=info msg="StartContainer for \"b1da91cff3daf165443e3a162761ec49fb468ffd98f12187d89209a49ef20920\"" Sep 12 23:55:42.578932 containerd[1447]: time="2025-09-12T23:55:42.578653542Z" level=info msg="CreateContainer within sandbox \"feed1da4e1b14a440ed0080a05a0dc66125cf1a6550a19302fc1e6b0150d3993\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"78455668d179ab97c9ee34d591c7aee7e472fbd5b2f0414c6a8fc5ab50295747\"" Sep 12 23:55:42.579446 containerd[1447]: time="2025-09-12T23:55:42.579410535Z" level=info msg="StartContainer for \"78455668d179ab97c9ee34d591c7aee7e472fbd5b2f0414c6a8fc5ab50295747\"" Sep 12 23:55:42.592949 systemd[1]: Started cri-containerd-b1da91cff3daf165443e3a162761ec49fb468ffd98f12187d89209a49ef20920.scope - libcontainer container b1da91cff3daf165443e3a162761ec49fb468ffd98f12187d89209a49ef20920. Sep 12 23:55:42.596092 systemd[1]: Started cri-containerd-c437f7e543995b12de0ac21bd69986d498e0cb581b1934e0de4266a3f4fd5d60.scope - libcontainer container c437f7e543995b12de0ac21bd69986d498e0cb581b1934e0de4266a3f4fd5d60. Sep 12 23:55:42.607015 systemd[1]: Started cri-containerd-78455668d179ab97c9ee34d591c7aee7e472fbd5b2f0414c6a8fc5ab50295747.scope - libcontainer container 78455668d179ab97c9ee34d591c7aee7e472fbd5b2f0414c6a8fc5ab50295747. 
Sep 12 23:55:42.615932 kubelet[2124]: E0912 23:55:42.615862 2124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="1.6s" Sep 12 23:55:42.639636 containerd[1447]: time="2025-09-12T23:55:42.639384336Z" level=info msg="StartContainer for \"b1da91cff3daf165443e3a162761ec49fb468ffd98f12187d89209a49ef20920\" returns successfully" Sep 12 23:55:42.652994 containerd[1447]: time="2025-09-12T23:55:42.652944563Z" level=info msg="StartContainer for \"c437f7e543995b12de0ac21bd69986d498e0cb581b1934e0de4266a3f4fd5d60\" returns successfully" Sep 12 23:55:42.653126 containerd[1447]: time="2025-09-12T23:55:42.653036909Z" level=info msg="StartContainer for \"78455668d179ab97c9ee34d591c7aee7e472fbd5b2f0414c6a8fc5ab50295747\" returns successfully" Sep 12 23:55:42.690785 kubelet[2124]: W0912 23:55:42.690678 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Sep 12 23:55:42.690785 kubelet[2124]: E0912 23:55:42.690746 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:55:42.853779 kubelet[2124]: I0912 23:55:42.853724 2124 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:55:43.236218 kubelet[2124]: E0912 23:55:43.236176 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:55:43.236388 kubelet[2124]: E0912 23:55:43.236367 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:43.238027 kubelet[2124]: E0912 23:55:43.237973 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:55:43.238100 kubelet[2124]: E0912 23:55:43.238093 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:43.238961 kubelet[2124]: E0912 23:55:43.238745 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:55:43.238961 kubelet[2124]: E0912 23:55:43.238859 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:44.241598 kubelet[2124]: E0912 23:55:44.241121 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:55:44.241598 kubelet[2124]: E0912 23:55:44.241336 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Sep 12 23:55:44.241598 kubelet[2124]: E0912 23:55:44.241537 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:44.243993 kubelet[2124]: E0912 23:55:44.241468 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:44.489018 kubelet[2124]: E0912 23:55:44.488975 2124 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 23:55:44.570541 kubelet[2124]: I0912 23:55:44.570506 2124 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 23:55:44.570541 kubelet[2124]: E0912 23:55:44.570546 2124 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 23:55:44.595198 kubelet[2124]: E0912 23:55:44.595161 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:55:44.695717 kubelet[2124]: E0912 23:55:44.695677 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:55:44.796429 kubelet[2124]: E0912 23:55:44.796377 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:55:44.911888 kubelet[2124]: I0912 23:55:44.911794 2124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 23:55:44.918660 kubelet[2124]: E0912 23:55:44.918625 2124 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 23:55:44.918660 kubelet[2124]: I0912 23:55:44.918656 2124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 23:55:44.920331 kubelet[2124]: E0912 23:55:44.920292 2124 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 23:55:44.920331 kubelet[2124]: I0912 23:55:44.920322 2124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 23:55:44.921659 kubelet[2124]: E0912 23:55:44.921628 2124 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 23:55:45.205030 kubelet[2124]: I0912 23:55:45.204931 2124 apiserver.go:52] "Watching apiserver" Sep 12 23:55:45.211501 kubelet[2124]: I0912 23:55:45.211469 2124 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 23:55:46.374466 systemd[1]: Reloading requested from client PID 2398 ('systemctl') (unit session-7.scope)... Sep 12 23:55:46.374480 systemd[1]: Reloading... Sep 12 23:55:46.436859 zram_generator::config[2443]: No configuration found. 
Sep 12 23:55:46.511387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:55:46.575745 systemd[1]: Reloading finished in 200 ms. Sep 12 23:55:46.610433 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:46.625571 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 23:55:46.625850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:46.625895 systemd[1]: kubelet.service: Consumed 1.120s CPU time, 129.7M memory peak, 0B memory swap peak. Sep 12 23:55:46.636017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:46.729542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:46.732986 (kubelet)[2479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:55:46.768153 kubelet[2479]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:55:46.768153 kubelet[2479]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 23:55:46.768153 kubelet[2479]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:55:46.768507 kubelet[2479]: I0912 23:55:46.768199 2479 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:55:46.777131 kubelet[2479]: I0912 23:55:46.777088 2479 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 23:55:46.777131 kubelet[2479]: I0912 23:55:46.777121 2479 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:55:46.777374 kubelet[2479]: I0912 23:55:46.777344 2479 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 23:55:46.778920 kubelet[2479]: I0912 23:55:46.778880 2479 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 23:55:46.783211 kubelet[2479]: I0912 23:55:46.783189 2479 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:55:46.786194 kubelet[2479]: E0912 23:55:46.786133 2479 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:55:46.786194 kubelet[2479]: I0912 23:55:46.786163 2479 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:55:46.790132 kubelet[2479]: I0912 23:55:46.788732 2479 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:55:46.790132 kubelet[2479]: I0912 23:55:46.788956 2479 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:55:46.790132 kubelet[2479]: I0912 23:55:46.788977 2479 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:55:46.790132 kubelet[2479]: I0912 23:55:46.789248 2479 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:55:46.790332 kubelet[2479]: I0912 23:55:46.789257 2479 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 23:55:46.790332 kubelet[2479]: I0912 23:55:46.789299 2479 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:55:46.790332 kubelet[2479]: I0912 23:55:46.789425 2479 kubelet.go:446] "Attempting to sync node with API server" Sep 12 23:55:46.790332 kubelet[2479]: I0912 23:55:46.789437 2479 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:55:46.790332 kubelet[2479]: I0912 23:55:46.789454 2479 kubelet.go:352] "Adding apiserver pod source" Sep 12 23:55:46.790332 kubelet[2479]: I0912 23:55:46.789463 2479 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:55:46.792180 kubelet[2479]: I0912 23:55:46.790919 2479 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:55:46.792180 kubelet[2479]: I0912 23:55:46.791404 2479 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:55:46.792180 kubelet[2479]: I0912 23:55:46.791874 2479 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 23:55:46.792180 kubelet[2479]: I0912 23:55:46.791901 2479 server.go:1287] "Started kubelet" Sep 12 23:55:46.793815 kubelet[2479]: I0912 23:55:46.793796 2479 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:55:46.798876 kubelet[2479]: I0912 23:55:46.798841 2479 volume_manager.go:297] "Starting 
Kubelet Volume Manager" Sep 12 23:55:46.799011 kubelet[2479]: E0912 23:55:46.798994 2479 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:55:46.799195 kubelet[2479]: I0912 23:55:46.799180 2479 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 23:55:46.799302 kubelet[2479]: I0912 23:55:46.799292 2479 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:55:46.803240 kubelet[2479]: I0912 23:55:46.802855 2479 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:55:46.804114 kubelet[2479]: I0912 23:55:46.804082 2479 server.go:479] "Adding debug handlers to kubelet server" Sep 12 23:55:46.806685 kubelet[2479]: I0912 23:55:46.804919 2479 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:55:46.806685 kubelet[2479]: I0912 23:55:46.805699 2479 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:55:46.806685 kubelet[2479]: I0912 23:55:46.806186 2479 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:55:46.807337 kubelet[2479]: I0912 23:55:46.807308 2479 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:55:46.807506 kubelet[2479]: I0912 23:55:46.807483 2479 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:55:46.809304 kubelet[2479]: I0912 23:55:46.809285 2479 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:55:46.819068 kubelet[2479]: I0912 23:55:46.819024 2479 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:55:46.820376 kubelet[2479]: I0912 23:55:46.820356 2479 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 23:55:46.820480 kubelet[2479]: I0912 23:55:46.820469 2479 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 23:55:46.820556 kubelet[2479]: I0912 23:55:46.820543 2479 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
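Among the services the restarted kubelet advertises above is the podresources API on unix:/var/lib/kubelet/pod-resources/kubelet.sock (the path is taken from the "Starting to serve the podresources API" line). A sketch of querying it with the published client stubs:

```go
// Sketch: list pod resource assignments from the kubelet socket named in
// the journal's "Starting to serve the podresources API" line.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := podresourcesapi.NewPodResourcesListerClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.List(ctx, &podresourcesapi.ListPodResourcesRequest{})
	if err != nil {
		panic(err)
	}
	for _, pr := range resp.PodResources {
		fmt.Printf("%s/%s\n", pr.Namespace, pr.Name)
	}
}
```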
Sep 12 23:55:46.820603 kubelet[2479]: I0912 23:55:46.820595 2479 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 23:55:46.820700 kubelet[2479]: E0912 23:55:46.820677 2479 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:55:46.852271 kubelet[2479]: I0912 23:55:46.852245 2479 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 23:55:46.852435 kubelet[2479]: I0912 23:55:46.852418 2479 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 23:55:46.852534 kubelet[2479]: I0912 23:55:46.852523 2479 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:55:46.852728 kubelet[2479]: I0912 23:55:46.852711 2479 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 23:55:46.852833 kubelet[2479]: I0912 23:55:46.852808 2479 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 23:55:46.852897 kubelet[2479]: I0912 23:55:46.852887 2479 policy_none.go:49] "None policy: Start" Sep 12 23:55:46.852949 kubelet[2479]: I0912 23:55:46.852941 2479 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 23:55:46.853002 kubelet[2479]: I0912 23:55:46.852994 2479 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:55:46.853155 kubelet[2479]: I0912 23:55:46.853142 2479 state_mem.go:75] "Updated machine memory state" Sep 12 23:55:46.856142 kubelet[2479]: I0912 23:55:46.856116 2479 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:55:46.856274 kubelet[2479]: I0912 23:55:46.856258 2479 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:55:46.856312 kubelet[2479]: I0912 23:55:46.856278 2479 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:55:46.856857 kubelet[2479]: I0912 23:55:46.856838 2479 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:55:46.858776 kubelet[2479]: E0912 23:55:46.858734 2479 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 23:55:46.922295 kubelet[2479]: I0912 23:55:46.922179 2479 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 23:55:46.922295 kubelet[2479]: I0912 23:55:46.922242 2479 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 23:55:46.922578 kubelet[2479]: I0912 23:55:46.922539 2479 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 23:55:46.960552 kubelet[2479]: I0912 23:55:46.960530 2479 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:55:46.967287 kubelet[2479]: I0912 23:55:46.967264 2479 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 23:55:46.967360 kubelet[2479]: I0912 23:55:46.967348 2479 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 23:55:46.999825 kubelet[2479]: I0912 23:55:46.999716 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbb35092dd229a62720579dec2ff373-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5bbb35092dd229a62720579dec2ff373\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:55:46.999825 kubelet[2479]: I0912 23:55:46.999753 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:55:46.999825 kubelet[2479]: I0912 23:55:46.999790 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:55:46.999825 kubelet[2479]: I0912 23:55:46.999810 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:55:46.999825 kubelet[2479]: I0912 23:55:46.999828 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 23:55:47.000035 kubelet[2479]: I0912 23:55:46.999844 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbb35092dd229a62720579dec2ff373-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5bbb35092dd229a62720579dec2ff373\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:55:47.000035 kubelet[2479]: I0912 23:55:46.999858 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:55:47.000035 kubelet[2479]: I0912 23:55:46.999872 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:55:47.000035 kubelet[2479]: I0912 23:55:46.999886 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbb35092dd229a62720579dec2ff373-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5bbb35092dd229a62720579dec2ff373\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:55:47.229428 kubelet[2479]: E0912 23:55:47.229294 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:47.229518 kubelet[2479]: E0912 23:55:47.229427 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:47.229562 kubelet[2479]: E0912 23:55:47.229543 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:47.790548 kubelet[2479]: I0912 23:55:47.790513 2479 apiserver.go:52] "Watching apiserver" Sep 12 23:55:47.799432 kubelet[2479]: I0912 23:55:47.799387 2479 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 23:55:47.838250 kubelet[2479]: E0912 23:55:47.838128 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:47.838250 kubelet[2479]: I0912 23:55:47.838174 2479 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 23:55:47.838740 kubelet[2479]: E0912 23:55:47.838624 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:47.845720 kubelet[2479]: E0912 23:55:47.845544 2479 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 23:55:47.845720 kubelet[2479]: E0912 23:55:47.845715 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:47.863671 kubelet[2479]: I0912 23:55:47.859871 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8598411590000001 podStartE2EDuration="1.859841159s" podCreationTimestamp="2025-09-12 23:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:55:47.858703464 +0000 UTC m=+1.122934663" 
watchObservedRunningTime="2025-09-12 23:55:47.859841159 +0000 UTC m=+1.124072398" Sep 12 23:55:47.880710 kubelet[2479]: I0912 23:55:47.880657 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.880640297 podStartE2EDuration="1.880640297s" podCreationTimestamp="2025-09-12 23:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:55:47.880603868 +0000 UTC m=+1.144835067" watchObservedRunningTime="2025-09-12 23:55:47.880640297 +0000 UTC m=+1.144871536" Sep 12 23:55:47.880710 kubelet[2479]: I0912 23:55:47.880736 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.88073147 podStartE2EDuration="1.88073147s" podCreationTimestamp="2025-09-12 23:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:55:47.870107369 +0000 UTC m=+1.134338608" watchObservedRunningTime="2025-09-12 23:55:47.88073147 +0000 UTC m=+1.144962709" Sep 12 23:55:48.839596 kubelet[2479]: E0912 23:55:48.839354 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:48.839596 kubelet[2479]: E0912 23:55:48.839534 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:49.840635 kubelet[2479]: E0912 23:55:49.840551 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:49.933826 kubelet[2479]: E0912 23:55:49.933536 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:51.997242 kubelet[2479]: I0912 23:55:51.997013 2479 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 23:55:51.997820 kubelet[2479]: I0912 23:55:51.997578 2479 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 23:55:51.997889 containerd[1447]: time="2025-09-12T23:55:51.997342084Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 23:55:52.766923 systemd[1]: Created slice kubepods-besteffort-pod95ec7981_ce41_4b69_a951_58135ee4cbab.slice - libcontainer container kubepods-besteffort-pod95ec7981_ce41_4b69_a951_58135ee4cbab.slice. 
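The "Updating Pod CIDR" / "Updating runtime config through cri with podcidr" pair above is the kubelet noticing that the node was assigned 192.168.0.0/24 and pushing that range down to containerd. Reading the assignment back with client-go, again under the hypothetical kubeconfig-path assumption:

```go
// Sketch: read the node's assigned PodCIDR, the value kubelet just pushed
// to the container runtime. Kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	node, err := cs.CoreV1().Nodes().Get(ctx, "localhost", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("podCIDR:", node.Spec.PodCIDR) // 192.168.0.0/24 per the journal
}
```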
Sep 12 23:55:52.839823 kubelet[2479]: I0912 23:55:52.839679 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/95ec7981-ce41-4b69-a951-58135ee4cbab-kube-proxy\") pod \"kube-proxy-vpww7\" (UID: \"95ec7981-ce41-4b69-a951-58135ee4cbab\") " pod="kube-system/kube-proxy-vpww7" Sep 12 23:55:52.839823 kubelet[2479]: I0912 23:55:52.839722 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95ec7981-ce41-4b69-a951-58135ee4cbab-xtables-lock\") pod \"kube-proxy-vpww7\" (UID: \"95ec7981-ce41-4b69-a951-58135ee4cbab\") " pod="kube-system/kube-proxy-vpww7" Sep 12 23:55:52.839823 kubelet[2479]: I0912 23:55:52.839743 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95ec7981-ce41-4b69-a951-58135ee4cbab-lib-modules\") pod \"kube-proxy-vpww7\" (UID: \"95ec7981-ce41-4b69-a951-58135ee4cbab\") " pod="kube-system/kube-proxy-vpww7" Sep 12 23:55:52.839823 kubelet[2479]: I0912 23:55:52.839778 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42tbc\" (UniqueName: \"kubernetes.io/projected/95ec7981-ce41-4b69-a951-58135ee4cbab-kube-api-access-42tbc\") pod \"kube-proxy-vpww7\" (UID: \"95ec7981-ce41-4b69-a951-58135ee4cbab\") " pod="kube-system/kube-proxy-vpww7" Sep 12 23:55:53.077252 kubelet[2479]: E0912 23:55:53.077199 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:53.077902 systemd[1]: Created slice kubepods-besteffort-poda72ee7d7_8e08_436d_9246_2bee8a73f0ad.slice - libcontainer container kubepods-besteffort-poda72ee7d7_8e08_436d_9246_2bee8a73f0ad.slice. Sep 12 23:55:53.079391 containerd[1447]: time="2025-09-12T23:55:53.077966505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vpww7,Uid:95ec7981-ce41-4b69-a951-58135ee4cbab,Namespace:kube-system,Attempt:0,}" Sep 12 23:55:53.099629 containerd[1447]: time="2025-09-12T23:55:53.099543158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:53.099629 containerd[1447]: time="2025-09-12T23:55:53.099591356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:53.099865 containerd[1447]: time="2025-09-12T23:55:53.099603155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:53.099865 containerd[1447]: time="2025-09-12T23:55:53.099679072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:53.124948 systemd[1]: Started cri-containerd-4f3b338e59844a877575cb4080b75b7312788f9a399576dd01b0b73f6c77a9ca.scope - libcontainer container 4f3b338e59844a877575cb4080b75b7312788f9a399576dd01b0b73f6c77a9ca. 
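The recurring "Nameserver limits exceeded" warning is benign: the host's /etc/resolv.conf lists more nameservers than the classic three-slot resolver limit, so the kubelet keeps only the first three (1.1.1.1 1.0.0.1 8.8.8.8, as logged) when assembling pod resolv.conf files. A standalone re-implementation of that trim, for illustration only:

```go
// Sketch: reproduce the nameserver cap behind the repeated
// "Nameserver limits exceeded" warnings. Standalone, stdlib only.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	const maxNS = 3 // the classic glibc resolver limit the kubelet also applies
	if len(servers) > maxNS {
		fmt.Printf("limit exceeded, applying first %d: %s\n",
			maxNS, strings.Join(servers[:maxNS], " "))
	} else {
		fmt.Println("nameservers:", strings.Join(servers, " "))
	}
}
```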
Sep 12 23:55:53.142103 containerd[1447]: time="2025-09-12T23:55:53.142042811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vpww7,Uid:95ec7981-ce41-4b69-a951-58135ee4cbab,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f3b338e59844a877575cb4080b75b7312788f9a399576dd01b0b73f6c77a9ca\"" Sep 12 23:55:53.142314 kubelet[2479]: I0912 23:55:53.142290 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a72ee7d7-8e08-436d-9246-2bee8a73f0ad-var-lib-calico\") pod \"tigera-operator-755d956888-577fh\" (UID: \"a72ee7d7-8e08-436d-9246-2bee8a73f0ad\") " pod="tigera-operator/tigera-operator-755d956888-577fh" Sep 12 23:55:53.142377 kubelet[2479]: I0912 23:55:53.142325 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjlng\" (UniqueName: \"kubernetes.io/projected/a72ee7d7-8e08-436d-9246-2bee8a73f0ad-kube-api-access-cjlng\") pod \"tigera-operator-755d956888-577fh\" (UID: \"a72ee7d7-8e08-436d-9246-2bee8a73f0ad\") " pod="tigera-operator/tigera-operator-755d956888-577fh" Sep 12 23:55:53.142828 kubelet[2479]: E0912 23:55:53.142805 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:53.145637 containerd[1447]: time="2025-09-12T23:55:53.145568336Z" level=info msg="CreateContainer within sandbox \"4f3b338e59844a877575cb4080b75b7312788f9a399576dd01b0b73f6c77a9ca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 23:55:53.157544 containerd[1447]: time="2025-09-12T23:55:53.157484973Z" level=info msg="CreateContainer within sandbox \"4f3b338e59844a877575cb4080b75b7312788f9a399576dd01b0b73f6c77a9ca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1216aec359cdb60acfbb4f5b665688b50acb94b66e0c7eb1a3a1ec28b1b159f1\"" Sep 12 23:55:53.159134 containerd[1447]: time="2025-09-12T23:55:53.158183422Z" level=info msg="StartContainer for \"1216aec359cdb60acfbb4f5b665688b50acb94b66e0c7eb1a3a1ec28b1b159f1\"" Sep 12 23:55:53.182910 systemd[1]: Started cri-containerd-1216aec359cdb60acfbb4f5b665688b50acb94b66e0c7eb1a3a1ec28b1b159f1.scope - libcontainer container 1216aec359cdb60acfbb4f5b665688b50acb94b66e0c7eb1a3a1ec28b1b159f1. Sep 12 23:55:53.205532 containerd[1447]: time="2025-09-12T23:55:53.205469825Z" level=info msg="StartContainer for \"1216aec359cdb60acfbb4f5b665688b50acb94b66e0c7eb1a3a1ec28b1b159f1\" returns successfully" Sep 12 23:55:53.383277 containerd[1447]: time="2025-09-12T23:55:53.383168821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-577fh,Uid:a72ee7d7-8e08-436d-9246-2bee8a73f0ad,Namespace:tigera-operator,Attempt:0,}" Sep 12 23:55:53.402290 containerd[1447]: time="2025-09-12T23:55:53.402155347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:53.402290 containerd[1447]: time="2025-09-12T23:55:53.402232064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:53.402803 containerd[1447]: time="2025-09-12T23:55:53.402251703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:53.402803 containerd[1447]: time="2025-09-12T23:55:53.402694523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:53.423926 systemd[1]: Started cri-containerd-65b09e499c37d080c98f5f25b0b5c42a96c764298a745116a3b11e5aba46ecc5.scope - libcontainer container 65b09e499c37d080c98f5f25b0b5c42a96c764298a745116a3b11e5aba46ecc5. Sep 12 23:55:53.449038 containerd[1447]: time="2025-09-12T23:55:53.448986370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-577fh,Uid:a72ee7d7-8e08-436d-9246-2bee8a73f0ad,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"65b09e499c37d080c98f5f25b0b5c42a96c764298a745116a3b11e5aba46ecc5\"" Sep 12 23:55:53.450884 containerd[1447]: time="2025-09-12T23:55:53.450853968Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 23:55:53.856133 kubelet[2479]: E0912 23:55:53.856057 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:54.837687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3826289330.mount: Deactivated successfully. Sep 12 23:55:55.125416 containerd[1447]: time="2025-09-12T23:55:55.125295075Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:55.126894 containerd[1447]: time="2025-09-12T23:55:55.126433590Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 23:55:55.127524 containerd[1447]: time="2025-09-12T23:55:55.127468790Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:55.130166 containerd[1447]: time="2025-09-12T23:55:55.129950532Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:55.131355 containerd[1447]: time="2025-09-12T23:55:55.131328718Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.680436711s" Sep 12 23:55:55.131473 containerd[1447]: time="2025-09-12T23:55:55.131456913Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 23:55:55.135438 containerd[1447]: time="2025-09-12T23:55:55.135313961Z" level=info msg="CreateContainer within sandbox \"65b09e499c37d080c98f5f25b0b5c42a96c764298a745116a3b11e5aba46ecc5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 23:55:55.146036 containerd[1447]: time="2025-09-12T23:55:55.145980941Z" level=info msg="CreateContainer within sandbox \"65b09e499c37d080c98f5f25b0b5c42a96c764298a745116a3b11e5aba46ecc5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a0e9d1f2c9fc92487604e49653d968399140afb78766ed5339a7187b839c5350\"" Sep 12 
23:55:55.146678 containerd[1447]: time="2025-09-12T23:55:55.146651035Z" level=info msg="StartContainer for \"a0e9d1f2c9fc92487604e49653d968399140afb78766ed5339a7187b839c5350\"" Sep 12 23:55:55.172961 systemd[1]: Started cri-containerd-a0e9d1f2c9fc92487604e49653d968399140afb78766ed5339a7187b839c5350.scope - libcontainer container a0e9d1f2c9fc92487604e49653d968399140afb78766ed5339a7187b839c5350. Sep 12 23:55:55.287048 containerd[1447]: time="2025-09-12T23:55:55.286965795Z" level=info msg="StartContainer for \"a0e9d1f2c9fc92487604e49653d968399140afb78766ed5339a7187b839c5350\" returns successfully" Sep 12 23:55:55.873148 kubelet[2479]: I0912 23:55:55.872912 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vpww7" podStartSLOduration=3.872893863 podStartE2EDuration="3.872893863s" podCreationTimestamp="2025-09-12 23:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:55:53.866164368 +0000 UTC m=+7.130395607" watchObservedRunningTime="2025-09-12 23:55:55.872893863 +0000 UTC m=+9.137125102" Sep 12 23:55:55.873148 kubelet[2479]: I0912 23:55:55.873037 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-577fh" podStartSLOduration=1.189275682 podStartE2EDuration="2.873031938s" podCreationTimestamp="2025-09-12 23:55:53 +0000 UTC" firstStartedPulling="2025-09-12 23:55:53.450022205 +0000 UTC m=+6.714253444" lastFinishedPulling="2025-09-12 23:55:55.133778462 +0000 UTC m=+8.398009700" observedRunningTime="2025-09-12 23:55:55.872270167 +0000 UTC m=+9.136501406" watchObservedRunningTime="2025-09-12 23:55:55.873031938 +0000 UTC m=+9.137263217" Sep 12 23:55:57.182467 kubelet[2479]: E0912 23:55:57.182275 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:57.867056 kubelet[2479]: E0912 23:55:57.867015 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:58.079601 kubelet[2479]: E0912 23:55:58.079569 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:58.868728 kubelet[2479]: E0912 23:55:58.868658 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:58.869244 kubelet[2479]: E0912 23:55:58.869222 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:55:59.945983 kubelet[2479]: E0912 23:55:59.945412 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:00.711405 sudo[1625]: pam_unix(sudo:session): session closed for user root Sep 12 23:56:00.714585 sshd[1621]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:00.720203 systemd-logind[1433]: Session 7 logged out. Waiting for processes to exit. 
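The sequence from "PullImage \"quay.io/tigera/operator:v1.38.6\"" through "Pulled image ... in 1.680436711s" above is one CRI ImageService call. A sketch of issuing the same pull directly; the containerd socket path is an assumption:

```go
// Sketch: pull the operator image over CRI, i.e. the call behind the
// PullImage/"Pulled image" lines above. Socket path is an assumption.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.6"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("image ref:", resp.ImageRef) // the repo digest the journal reports
}
```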
Sep 12 23:56:00.720388 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:43788.service: Deactivated successfully. Sep 12 23:56:00.721916 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 23:56:00.722069 systemd[1]: session-7.scope: Consumed 7.422s CPU time, 152.9M memory peak, 0B memory swap peak. Sep 12 23:56:00.724117 systemd-logind[1433]: Removed session 7. Sep 12 23:56:02.092938 update_engine[1436]: I20250912 23:56:02.092872 1436 update_attempter.cc:509] Updating boot flags... Sep 12 23:56:02.163593 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2889) Sep 12 23:56:02.266786 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2888) Sep 12 23:56:02.317833 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2888) Sep 12 23:56:05.961403 systemd[1]: Created slice kubepods-besteffort-poda056dba5_179b_484e_95e4_6acf2fa86521.slice - libcontainer container kubepods-besteffort-poda056dba5_179b_484e_95e4_6acf2fa86521.slice. Sep 12 23:56:06.041440 kubelet[2479]: I0912 23:56:06.041390 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a056dba5-179b-484e-95e4-6acf2fa86521-tigera-ca-bundle\") pod \"calico-typha-5756fccd9-5r2pd\" (UID: \"a056dba5-179b-484e-95e4-6acf2fa86521\") " pod="calico-system/calico-typha-5756fccd9-5r2pd" Sep 12 23:56:06.041440 kubelet[2479]: I0912 23:56:06.041443 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a056dba5-179b-484e-95e4-6acf2fa86521-typha-certs\") pod \"calico-typha-5756fccd9-5r2pd\" (UID: \"a056dba5-179b-484e-95e4-6acf2fa86521\") " pod="calico-system/calico-typha-5756fccd9-5r2pd" Sep 12 23:56:06.041912 kubelet[2479]: I0912 23:56:06.041464 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xglxk\" (UniqueName: \"kubernetes.io/projected/a056dba5-179b-484e-95e4-6acf2fa86521-kube-api-access-xglxk\") pod \"calico-typha-5756fccd9-5r2pd\" (UID: \"a056dba5-179b-484e-95e4-6acf2fa86521\") " pod="calico-system/calico-typha-5756fccd9-5r2pd" Sep 12 23:56:06.269695 kubelet[2479]: E0912 23:56:06.267848 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:06.272304 containerd[1447]: time="2025-09-12T23:56:06.272263999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5756fccd9-5r2pd,Uid:a056dba5-179b-484e-95e4-6acf2fa86521,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:06.387623 systemd[1]: Created slice kubepods-besteffort-podacec4f06_6a66_4d53_a72d_7cb574763246.slice - libcontainer container kubepods-besteffort-podacec4f06_6a66_4d53_a72d_7cb574763246.slice. Sep 12 23:56:06.432974 containerd[1447]: time="2025-09-12T23:56:06.432422923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:06.432974 containerd[1447]: time="2025-09-12T23:56:06.432838274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:06.432974 containerd[1447]: time="2025-09-12T23:56:06.432851234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:06.433155 containerd[1447]: time="2025-09-12T23:56:06.432971991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:06.449975 kubelet[2479]: I0912 23:56:06.445476 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acec4f06-6a66-4d53-a72d-7cb574763246-xtables-lock\") pod \"calico-node-tq8pp\" (UID: \"acec4f06-6a66-4d53-a72d-7cb574763246\") " pod="calico-system/calico-node-tq8pp" Sep 12 23:56:06.449975 kubelet[2479]: I0912 23:56:06.449912 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d7h4\" (UniqueName: \"kubernetes.io/projected/acec4f06-6a66-4d53-a72d-7cb574763246-kube-api-access-4d7h4\") pod \"calico-node-tq8pp\" (UID: \"acec4f06-6a66-4d53-a72d-7cb574763246\") " pod="calico-system/calico-node-tq8pp" Sep 12 23:56:06.449975 kubelet[2479]: I0912 23:56:06.449946 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/acec4f06-6a66-4d53-a72d-7cb574763246-policysync\") pod \"calico-node-tq8pp\" (UID: \"acec4f06-6a66-4d53-a72d-7cb574763246\") " pod="calico-system/calico-node-tq8pp" Sep 12 23:56:06.450962 kubelet[2479]: I0912 23:56:06.449971 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/acec4f06-6a66-4d53-a72d-7cb574763246-var-run-calico\") pod \"calico-node-tq8pp\" (UID: \"acec4f06-6a66-4d53-a72d-7cb574763246\") " pod="calico-system/calico-node-tq8pp" Sep 12 23:56:06.450962 kubelet[2479]: I0912 23:56:06.450671 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/acec4f06-6a66-4d53-a72d-7cb574763246-cni-bin-dir\") pod \"calico-node-tq8pp\" (UID: \"acec4f06-6a66-4d53-a72d-7cb574763246\") " pod="calico-system/calico-node-tq8pp" Sep 12 23:56:06.450962 kubelet[2479]: I0912 23:56:06.450707 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/acec4f06-6a66-4d53-a72d-7cb574763246-cni-log-dir\") pod \"calico-node-tq8pp\" (UID: \"acec4f06-6a66-4d53-a72d-7cb574763246\") " pod="calico-system/calico-node-tq8pp" Sep 12 23:56:06.450962 kubelet[2479]: I0912 23:56:06.450725 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/acec4f06-6a66-4d53-a72d-7cb574763246-node-certs\") pod \"calico-node-tq8pp\" (UID: \"acec4f06-6a66-4d53-a72d-7cb574763246\") " pod="calico-system/calico-node-tq8pp" Sep 12 23:56:06.450962 kubelet[2479]: I0912 23:56:06.450748 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/acec4f06-6a66-4d53-a72d-7cb574763246-flexvol-driver-host\") pod \"calico-node-tq8pp\" (UID: \"acec4f06-6a66-4d53-a72d-7cb574763246\") " pod="calico-system/calico-node-tq8pp" 
Sep 12 23:56:06.452644 kubelet[2479]: I0912 23:56:06.450782 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acec4f06-6a66-4d53-a72d-7cb574763246-tigera-ca-bundle\") pod \"calico-node-tq8pp\" (UID: \"acec4f06-6a66-4d53-a72d-7cb574763246\") " pod="calico-system/calico-node-tq8pp" Sep 12 23:56:06.452644 kubelet[2479]: I0912 23:56:06.450803 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/acec4f06-6a66-4d53-a72d-7cb574763246-cni-net-dir\") pod \"calico-node-tq8pp\" (UID: \"acec4f06-6a66-4d53-a72d-7cb574763246\") " pod="calico-system/calico-node-tq8pp" Sep 12 23:56:06.452644 kubelet[2479]: I0912 23:56:06.450821 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acec4f06-6a66-4d53-a72d-7cb574763246-lib-modules\") pod \"calico-node-tq8pp\" (UID: \"acec4f06-6a66-4d53-a72d-7cb574763246\") " pod="calico-system/calico-node-tq8pp" Sep 12 23:56:06.452644 kubelet[2479]: I0912 23:56:06.450853 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/acec4f06-6a66-4d53-a72d-7cb574763246-var-lib-calico\") pod \"calico-node-tq8pp\" (UID: \"acec4f06-6a66-4d53-a72d-7cb574763246\") " pod="calico-system/calico-node-tq8pp" Sep 12 23:56:06.461963 systemd[1]: Started cri-containerd-d3d9b4a65794a7241c96e6d2217d8d21a193f730828d9e2a293e928a383351cd.scope - libcontainer container d3d9b4a65794a7241c96e6d2217d8d21a193f730828d9e2a293e928a383351cd. Sep 12 23:56:06.500177 containerd[1447]: time="2025-09-12T23:56:06.500139563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5756fccd9-5r2pd,Uid:a056dba5-179b-484e-95e4-6acf2fa86521,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3d9b4a65794a7241c96e6d2217d8d21a193f730828d9e2a293e928a383351cd\"" Sep 12 23:56:06.503626 kubelet[2479]: E0912 23:56:06.503536 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:06.505307 containerd[1447]: time="2025-09-12T23:56:06.505268768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 23:56:06.538751 kubelet[2479]: E0912 23:56:06.538710 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w5b24" podUID="2a2755c3-03c7-4f05-b24d-8c93e47436ce" Sep 12 23:56:06.582794 kubelet[2479]: E0912 23:56:06.580591 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.582794 kubelet[2479]: W0912 23:56:06.580673 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.589100 kubelet[2479]: E0912 23:56:06.589073 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.589276 kubelet[2479]: W0912 23:56:06.589258 2479 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.589445 kubelet[2479]: E0912 23:56:06.589428 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.591675 kubelet[2479]: E0912 23:56:06.591640 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.627709 kubelet[2479]: E0912 23:56:06.627698 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.627709 kubelet[2479]: W0912 23:56:06.627708 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.627790 kubelet[2479]: E0912 23:56:06.627718 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 12 23:56:06.627973 kubelet[2479]: E0912 23:56:06.627944 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.628104 kubelet[2479]: W0912 23:56:06.627975 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.628104 kubelet[2479]: E0912 23:56:06.627986 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.652575 kubelet[2479]: E0912 23:56:06.652464 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.652575 kubelet[2479]: W0912 23:56:06.652489 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.652575 kubelet[2479]: E0912 23:56:06.652508 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.652575 kubelet[2479]: I0912 23:56:06.652541 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lrd9\" (UniqueName: \"kubernetes.io/projected/2a2755c3-03c7-4f05-b24d-8c93e47436ce-kube-api-access-9lrd9\") pod \"csi-node-driver-w5b24\" (UID: \"2a2755c3-03c7-4f05-b24d-8c93e47436ce\") " pod="calico-system/csi-node-driver-w5b24" Sep 12 23:56:06.652872 kubelet[2479]: E0912 23:56:06.652848 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.652872 kubelet[2479]: W0912 23:56:06.652868 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.652938 kubelet[2479]: E0912 23:56:06.652889 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.653113 kubelet[2479]: E0912 23:56:06.653102 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.653113 kubelet[2479]: W0912 23:56:06.653112 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.653182 kubelet[2479]: E0912 23:56:06.653127 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:06.653503 kubelet[2479]: E0912 23:56:06.653490 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.653503 kubelet[2479]: W0912 23:56:06.653502 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.653559 kubelet[2479]: E0912 23:56:06.653512 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.653559 kubelet[2479]: I0912 23:56:06.653536 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2a2755c3-03c7-4f05-b24d-8c93e47436ce-socket-dir\") pod \"csi-node-driver-w5b24\" (UID: \"2a2755c3-03c7-4f05-b24d-8c93e47436ce\") " pod="calico-system/csi-node-driver-w5b24" Sep 12 23:56:06.653751 kubelet[2479]: E0912 23:56:06.653738 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.653809 kubelet[2479]: W0912 23:56:06.653751 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.653809 kubelet[2479]: E0912 23:56:06.653782 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.653809 kubelet[2479]: I0912 23:56:06.653799 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a2755c3-03c7-4f05-b24d-8c93e47436ce-kubelet-dir\") pod \"csi-node-driver-w5b24\" (UID: \"2a2755c3-03c7-4f05-b24d-8c93e47436ce\") " pod="calico-system/csi-node-driver-w5b24" Sep 12 23:56:06.654066 kubelet[2479]: E0912 23:56:06.654048 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.654103 kubelet[2479]: W0912 23:56:06.654068 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.654103 kubelet[2479]: E0912 23:56:06.654088 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.654510 kubelet[2479]: E0912 23:56:06.654495 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.654510 kubelet[2479]: W0912 23:56:06.654510 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.654573 kubelet[2479]: E0912 23:56:06.654528 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:06.654888 kubelet[2479]: E0912 23:56:06.654873 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.654928 kubelet[2479]: W0912 23:56:06.654889 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.654928 kubelet[2479]: E0912 23:56:06.654902 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.655031 kubelet[2479]: I0912 23:56:06.654922 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2a2755c3-03c7-4f05-b24d-8c93e47436ce-registration-dir\") pod \"csi-node-driver-w5b24\" (UID: \"2a2755c3-03c7-4f05-b24d-8c93e47436ce\") " pod="calico-system/csi-node-driver-w5b24" Sep 12 23:56:06.655439 kubelet[2479]: E0912 23:56:06.655422 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.655479 kubelet[2479]: W0912 23:56:06.655439 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.655479 kubelet[2479]: E0912 23:56:06.655459 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.655525 kubelet[2479]: I0912 23:56:06.655483 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2a2755c3-03c7-4f05-b24d-8c93e47436ce-varrun\") pod \"csi-node-driver-w5b24\" (UID: \"2a2755c3-03c7-4f05-b24d-8c93e47436ce\") " pod="calico-system/csi-node-driver-w5b24" Sep 12 23:56:06.655828 kubelet[2479]: E0912 23:56:06.655803 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.655858 kubelet[2479]: W0912 23:56:06.655830 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.655908 kubelet[2479]: E0912 23:56:06.655892 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.656159 kubelet[2479]: E0912 23:56:06.656144 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.656229 kubelet[2479]: W0912 23:56:06.656213 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.656303 kubelet[2479]: E0912 23:56:06.656277 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:06.656422 kubelet[2479]: E0912 23:56:06.656410 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.656452 kubelet[2479]: W0912 23:56:06.656423 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.656500 kubelet[2479]: E0912 23:56:06.656461 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.656595 kubelet[2479]: E0912 23:56:06.656586 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.656627 kubelet[2479]: W0912 23:56:06.656595 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.656627 kubelet[2479]: E0912 23:56:06.656612 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.656806 kubelet[2479]: E0912 23:56:06.656797 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.656806 kubelet[2479]: W0912 23:56:06.656806 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.656869 kubelet[2479]: E0912 23:56:06.656816 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.656995 kubelet[2479]: E0912 23:56:06.656984 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.656995 kubelet[2479]: W0912 23:56:06.656994 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.657069 kubelet[2479]: E0912 23:56:06.657002 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.691085 containerd[1447]: time="2025-09-12T23:56:06.691030397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tq8pp,Uid:acec4f06-6a66-4d53-a72d-7cb574763246,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:06.730202 containerd[1447]: time="2025-09-12T23:56:06.730056681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:06.730202 containerd[1447]: time="2025-09-12T23:56:06.730136159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:06.730202 containerd[1447]: time="2025-09-12T23:56:06.730151798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:06.730575 containerd[1447]: time="2025-09-12T23:56:06.730524150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:06.757263 kubelet[2479]: E0912 23:56:06.756826 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.757263 kubelet[2479]: W0912 23:56:06.756857 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.757263 kubelet[2479]: E0912 23:56:06.756878 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.758666 kubelet[2479]: E0912 23:56:06.758628 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.758666 kubelet[2479]: W0912 23:56:06.758649 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.758666 kubelet[2479]: E0912 23:56:06.758672 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.759909 kubelet[2479]: E0912 23:56:06.759875 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.759909 kubelet[2479]: W0912 23:56:06.759898 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.760006 kubelet[2479]: E0912 23:56:06.759918 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.762857 kubelet[2479]: E0912 23:56:06.762830 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.762857 kubelet[2479]: W0912 23:56:06.762854 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.762954 kubelet[2479]: E0912 23:56:06.762931 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:06.765903 kubelet[2479]: E0912 23:56:06.764997 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.765903 kubelet[2479]: W0912 23:56:06.765014 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.765903 kubelet[2479]: E0912 23:56:06.765102 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.765903 kubelet[2479]: E0912 23:56:06.765393 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.765903 kubelet[2479]: W0912 23:56:06.765408 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.765903 kubelet[2479]: E0912 23:56:06.765447 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.765903 kubelet[2479]: E0912 23:56:06.765585 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.765903 kubelet[2479]: W0912 23:56:06.765595 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.765662 systemd[1]: Started cri-containerd-6a8d338af7cb0f93ad24fda7ae74ad1229fd2aea497c5be8a813ed50fc5d7e72.scope - libcontainer container 6a8d338af7cb0f93ad24fda7ae74ad1229fd2aea497c5be8a813ed50fc5d7e72. Sep 12 23:56:06.766174 kubelet[2479]: E0912 23:56:06.765914 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.766174 kubelet[2479]: W0912 23:56:06.765928 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.766174 kubelet[2479]: E0912 23:56:06.765947 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.766261 kubelet[2479]: E0912 23:56:06.766201 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.766261 kubelet[2479]: W0912 23:56:06.766213 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.766261 kubelet[2479]: E0912 23:56:06.766231 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:06.766625 kubelet[2479]: E0912 23:56:06.766444 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.766625 kubelet[2479]: W0912 23:56:06.766463 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.766625 kubelet[2479]: E0912 23:56:06.766482 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.774774 kubelet[2479]: E0912 23:56:06.774740 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.774829 kubelet[2479]: W0912 23:56:06.774783 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.774897 kubelet[2479]: E0912 23:56:06.774875 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 12 23:56:06.775041 kubelet[2479]: E0912 23:56:06.775019 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.775041 kubelet[2479]: W0912 23:56:06.775036 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.775107 kubelet[2479]: E0912 23:56:06.775049 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.775713 kubelet[2479]: E0912 23:56:06.775690 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.775713 kubelet[2479]: W0912 23:56:06.775708 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.775812 kubelet[2479]: E0912 23:56:06.775722 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.788926 kubelet[2479]: E0912 23:56:06.788837 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:06.788926 kubelet[2479]: W0912 23:56:06.788860 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:06.788926 kubelet[2479]: E0912 23:56:06.788882 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:06.800703 containerd[1447]: time="2025-09-12T23:56:06.800661735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tq8pp,Uid:acec4f06-6a66-4d53-a72d-7cb574763246,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a8d338af7cb0f93ad24fda7ae74ad1229fd2aea497c5be8a813ed50fc5d7e72\"" Sep 12 23:56:07.700455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383162554.mount: Deactivated successfully. 
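The FlexVolume triplets that dominate this stretch (driver-call.go:262, driver-call.go:149, plugins.go:695) are one failure reported three ways: on each plugin probe the kubelet execs the driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument, the binary is missing, the call returns empty output, and unmarshalling "" as JSON yields "unexpected end of JSON input", so the nodeagent~uds plugin is skipped. A minimal Go sketch of that chain; probeDriver and its error handling are illustrative, not kubelet source:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// probeDriver mimics the init handshake this log shows failing: run the
// driver binary, then decode its stdout as a JSON status object.
func probeDriver(path string) error {
	out, err := exec.Command(path, "init").Output()
	if err != nil {
		// With a missing binary this is "executable file not found in $PATH"
		// (or a plain "no such file" for an absolute path) and out stays empty.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var status map[string]any
	if jerr := json.Unmarshal(out, &status); jerr != nil {
		// json.Unmarshal on empty input returns "unexpected end of JSON input".
		return fmt.Errorf("failed to unmarshal output: %w", jerr)
	}
	return nil
}

func main() {
	err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
```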
Sep 12 23:56:07.821111 kubelet[2479]: E0912 23:56:07.821065 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w5b24" podUID="2a2755c3-03c7-4f05-b24d-8c93e47436ce" Sep 12 23:56:08.043310 containerd[1447]: time="2025-09-12T23:56:08.043265978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:08.044185 containerd[1447]: time="2025-09-12T23:56:08.043922885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 12 23:56:08.044935 containerd[1447]: time="2025-09-12T23:56:08.044887145Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:08.047431 containerd[1447]: time="2025-09-12T23:56:08.047258136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:08.048032 containerd[1447]: time="2025-09-12T23:56:08.048002081Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.542689754s" Sep 12 23:56:08.048220 containerd[1447]: time="2025-09-12T23:56:08.048092599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 12 23:56:08.049357 containerd[1447]: time="2025-09-12T23:56:08.049329734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 23:56:08.074263 containerd[1447]: time="2025-09-12T23:56:08.074205785Z" level=info msg="CreateContainer within sandbox \"d3d9b4a65794a7241c96e6d2217d8d21a193f730828d9e2a293e928a383351cd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 23:56:08.086808 containerd[1447]: time="2025-09-12T23:56:08.086732288Z" level=info msg="CreateContainer within sandbox \"d3d9b4a65794a7241c96e6d2217d8d21a193f730828d9e2a293e928a383351cd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"be6cb94dc3e909a58d8b72c52ad899346115bdb63cd3bb2d26310cd1c34cbd36\"" Sep 12 23:56:08.087432 containerd[1447]: time="2025-09-12T23:56:08.087373075Z" level=info msg="StartContainer for \"be6cb94dc3e909a58d8b72c52ad899346115bdb63cd3bb2d26310cd1c34cbd36\"" Sep 12 23:56:08.114992 systemd[1]: Started cri-containerd-be6cb94dc3e909a58d8b72c52ad899346115bdb63cd3bb2d26310cd1c34cbd36.scope - libcontainer container be6cb94dc3e909a58d8b72c52ad899346115bdb63cd3bb2d26310cd1c34cbd36. 
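The typha pull above checks out the same way as the earlier tigera-operator pull: the reported "in 1.542689754s" is consistent with the gap between the PullImage entry at 23:56:06.505268768Z and this Pulled entry at 23:56:08.048002081Z (the small difference is containerd timing the pull internally rather than between log writes). A short Go check over those RFC 3339 timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start, _ := time.Parse(time.RFC3339Nano, "2025-09-12T23:56:06.505268768Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-09-12T23:56:08.048002081Z")
	// Prints 1.542733313s, in line with the 1.542689754s the Pulled message reports.
	fmt.Println(done.Sub(start))
}
```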
Sep 12 23:56:08.184095 containerd[1447]: time="2025-09-12T23:56:08.184051616Z" level=info msg="StartContainer for \"be6cb94dc3e909a58d8b72c52ad899346115bdb63cd3bb2d26310cd1c34cbd36\" returns successfully" Sep 12 23:56:08.890947 kubelet[2479]: E0912 23:56:08.890918 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:08.904627 kubelet[2479]: I0912 23:56:08.904355 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5756fccd9-5r2pd" podStartSLOduration=2.3598571919999998 podStartE2EDuration="3.904338268s" podCreationTimestamp="2025-09-12 23:56:05 +0000 UTC" firstStartedPulling="2025-09-12 23:56:06.504692141 +0000 UTC m=+19.768923380" lastFinishedPulling="2025-09-12 23:56:08.049173217 +0000 UTC m=+21.313404456" observedRunningTime="2025-09-12 23:56:08.903629522 +0000 UTC m=+22.167860761" watchObservedRunningTime="2025-09-12 23:56:08.904338268 +0000 UTC m=+22.168569507" Sep 12 23:56:08.940977 kubelet[2479]: E0912 23:56:08.940948 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.941241 kubelet[2479]: W0912 23:56:08.941113 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.941241 kubelet[2479]: E0912 23:56:08.941156 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.941385 kubelet[2479]: E0912 23:56:08.941372 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.941441 kubelet[2479]: W0912 23:56:08.941431 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.941498 kubelet[2479]: E0912 23:56:08.941487 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.941712 kubelet[2479]: E0912 23:56:08.941698 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.941876 kubelet[2479]: W0912 23:56:08.941778 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.941876 kubelet[2479]: E0912 23:56:08.941795 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:08.942020 kubelet[2479]: E0912 23:56:08.942008 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.942079 kubelet[2479]: W0912 23:56:08.942067 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.942148 kubelet[2479]: E0912 23:56:08.942126 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.942447 kubelet[2479]: E0912 23:56:08.942363 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.942447 kubelet[2479]: W0912 23:56:08.942374 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.942447 kubelet[2479]: E0912 23:56:08.942384 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.942606 kubelet[2479]: E0912 23:56:08.942594 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.942658 kubelet[2479]: W0912 23:56:08.942647 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.942717 kubelet[2479]: E0912 23:56:08.942707 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.943064 kubelet[2479]: E0912 23:56:08.942972 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.943064 kubelet[2479]: W0912 23:56:08.942983 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.943064 kubelet[2479]: E0912 23:56:08.942992 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.943288 kubelet[2479]: E0912 23:56:08.943275 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.943344 kubelet[2479]: W0912 23:56:08.943332 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.943488 kubelet[2479]: E0912 23:56:08.943390 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:08.943680 kubelet[2479]: E0912 23:56:08.943588 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.943680 kubelet[2479]: W0912 23:56:08.943601 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.943680 kubelet[2479]: E0912 23:56:08.943610 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.943855 kubelet[2479]: E0912 23:56:08.943842 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.943919 kubelet[2479]: W0912 23:56:08.943908 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.943979 kubelet[2479]: E0912 23:56:08.943967 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.944216 kubelet[2479]: E0912 23:56:08.944203 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.944286 kubelet[2479]: W0912 23:56:08.944274 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.944431 kubelet[2479]: E0912 23:56:08.944342 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.944539 kubelet[2479]: E0912 23:56:08.944527 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.944598 kubelet[2479]: W0912 23:56:08.944587 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.944658 kubelet[2479]: E0912 23:56:08.944646 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.944895 kubelet[2479]: E0912 23:56:08.944882 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.944972 kubelet[2479]: W0912 23:56:08.944960 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.945087 kubelet[2479]: E0912 23:56:08.945036 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:08.945523 kubelet[2479]: E0912 23:56:08.945392 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.945523 kubelet[2479]: W0912 23:56:08.945406 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.945523 kubelet[2479]: E0912 23:56:08.945416 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.945717 kubelet[2479]: E0912 23:56:08.945703 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.945820 kubelet[2479]: W0912 23:56:08.945806 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.945927 kubelet[2479]: E0912 23:56:08.945914 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.981972 kubelet[2479]: E0912 23:56:08.981932 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.981972 kubelet[2479]: W0912 23:56:08.981956 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.981972 kubelet[2479]: E0912 23:56:08.981975 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.982305 kubelet[2479]: E0912 23:56:08.982201 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.982305 kubelet[2479]: W0912 23:56:08.982210 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.982305 kubelet[2479]: E0912 23:56:08.982224 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.983245 kubelet[2479]: E0912 23:56:08.982406 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.983245 kubelet[2479]: W0912 23:56:08.982418 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.983245 kubelet[2479]: E0912 23:56:08.982431 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:08.983245 kubelet[2479]: E0912 23:56:08.982626 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.983245 kubelet[2479]: W0912 23:56:08.982634 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.983245 kubelet[2479]: E0912 23:56:08.982647 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.983245 kubelet[2479]: E0912 23:56:08.982795 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.983245 kubelet[2479]: W0912 23:56:08.982803 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.983245 kubelet[2479]: E0912 23:56:08.982817 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.983245 kubelet[2479]: E0912 23:56:08.982950 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.984200 kubelet[2479]: W0912 23:56:08.982963 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.984200 kubelet[2479]: E0912 23:56:08.982977 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.984200 kubelet[2479]: E0912 23:56:08.983181 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.984200 kubelet[2479]: W0912 23:56:08.983191 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.984200 kubelet[2479]: E0912 23:56:08.983208 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.984200 kubelet[2479]: E0912 23:56:08.983485 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.984200 kubelet[2479]: W0912 23:56:08.983502 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.984200 kubelet[2479]: E0912 23:56:08.983549 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:08.984200 kubelet[2479]: E0912 23:56:08.983742 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.984200 kubelet[2479]: W0912 23:56:08.983751 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.984479 kubelet[2479]: E0912 23:56:08.983921 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.984479 kubelet[2479]: E0912 23:56:08.984119 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.984479 kubelet[2479]: W0912 23:56:08.984141 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.984479 kubelet[2479]: E0912 23:56:08.984170 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.984479 kubelet[2479]: E0912 23:56:08.984327 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.984479 kubelet[2479]: W0912 23:56:08.984336 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.984479 kubelet[2479]: E0912 23:56:08.984346 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.984630 kubelet[2479]: E0912 23:56:08.984492 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.984630 kubelet[2479]: W0912 23:56:08.984499 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.984630 kubelet[2479]: E0912 23:56:08.984507 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.984694 kubelet[2479]: E0912 23:56:08.984662 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.984694 kubelet[2479]: W0912 23:56:08.984672 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.984739 kubelet[2479]: E0912 23:56:08.984718 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:08.985193 kubelet[2479]: E0912 23:56:08.985120 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.985193 kubelet[2479]: W0912 23:56:08.985145 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.985193 kubelet[2479]: E0912 23:56:08.985161 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.985452 kubelet[2479]: E0912 23:56:08.985325 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.985452 kubelet[2479]: W0912 23:56:08.985338 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.985452 kubelet[2479]: E0912 23:56:08.985347 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.985870 kubelet[2479]: E0912 23:56:08.985483 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.985870 kubelet[2479]: W0912 23:56:08.985493 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.985870 kubelet[2479]: E0912 23:56:08.985502 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.986061 kubelet[2479]: E0912 23:56:08.985950 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.986061 kubelet[2479]: W0912 23:56:08.985962 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.986061 kubelet[2479]: E0912 23:56:08.985972 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:08.987142 kubelet[2479]: E0912 23:56:08.987109 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:08.987142 kubelet[2479]: W0912 23:56:08.987140 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:08.987367 kubelet[2479]: E0912 23:56:08.987154 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:08.993385 containerd[1447]: time="2025-09-12T23:56:08.993318886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:08.994002 containerd[1447]: time="2025-09-12T23:56:08.993957073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 12 23:56:08.995443 containerd[1447]: time="2025-09-12T23:56:08.995412283Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:08.998376 containerd[1447]: time="2025-09-12T23:56:08.998341183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:08.999337 containerd[1447]: time="2025-09-12T23:56:08.999079168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 949.715514ms" Sep 12 23:56:08.999337 containerd[1447]: time="2025-09-12T23:56:08.999121167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 23:56:09.002808 containerd[1447]: time="2025-09-12T23:56:09.002768733Z" level=info msg="CreateContainer within sandbox \"6a8d338af7cb0f93ad24fda7ae74ad1229fd2aea497c5be8a813ed50fc5d7e72\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 23:56:09.014089 containerd[1447]: time="2025-09-12T23:56:09.013970434Z" level=info msg="CreateContainer within sandbox \"6a8d338af7cb0f93ad24fda7ae74ad1229fd2aea497c5be8a813ed50fc5d7e72\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fc7d1b20c5178dfc60fcae714347353289fd4cf03db08272a3b90e9f4a2a7894\"" Sep 12 23:56:09.014755 containerd[1447]: time="2025-09-12T23:56:09.014472424Z" level=info msg="StartContainer for \"fc7d1b20c5178dfc60fcae714347353289fd4cf03db08272a3b90e9f4a2a7894\"" Sep 12 23:56:09.047999 systemd[1]: Started cri-containerd-fc7d1b20c5178dfc60fcae714347353289fd4cf03db08272a3b90e9f4a2a7894.scope - libcontainer container fc7d1b20c5178dfc60fcae714347353289fd4cf03db08272a3b90e9f4a2a7894. Sep 12 23:56:09.084221 containerd[1447]: time="2025-09-12T23:56:09.084034182Z" level=info msg="StartContainer for \"fc7d1b20c5178dfc60fcae714347353289fd4cf03db08272a3b90e9f4a2a7894\" returns successfully" Sep 12 23:56:09.086872 systemd[1]: cri-containerd-fc7d1b20c5178dfc60fcae714347353289fd4cf03db08272a3b90e9f4a2a7894.scope: Deactivated successfully. 
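[Editor's note] The flexvol-driver init container above exists precisely to drop the `uds` binary into the plugin directory kubelet has been probing; it runs to completion, which is why systemd immediately deactivates its scope. A hedged sketch of the existence-and-executability check a prober might perform, with the path taken from the log entries above (illustrative logic, not kubelet's actual prober code):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// The path kubelet's FlexVolume prober was complaining about above.
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	info, err := os.Stat(driver)
	if err != nil {
		fmt.Println("driver not installed yet:", err)
		return
	}
	// Any executable bit set counts for this rough check.
	if info.Mode().Perm()&0o111 == 0 {
		fmt.Println("driver present but not executable")
		return
	}
	fmt.Println("driver ready:", driver)
}
```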
Sep 12 23:56:09.125061 containerd[1447]: time="2025-09-12T23:56:09.121955400Z" level=info msg="shim disconnected" id=fc7d1b20c5178dfc60fcae714347353289fd4cf03db08272a3b90e9f4a2a7894 namespace=k8s.io Sep 12 23:56:09.125061 containerd[1447]: time="2025-09-12T23:56:09.125051779Z" level=warning msg="cleaning up after shim disconnected" id=fc7d1b20c5178dfc60fcae714347353289fd4cf03db08272a3b90e9f4a2a7894 namespace=k8s.io Sep 12 23:56:09.125321 containerd[1447]: time="2025-09-12T23:56:09.125074739Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:56:09.150554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc7d1b20c5178dfc60fcae714347353289fd4cf03db08272a3b90e9f4a2a7894-rootfs.mount: Deactivated successfully. Sep 12 23:56:09.821295 kubelet[2479]: E0912 23:56:09.821226 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w5b24" podUID="2a2755c3-03c7-4f05-b24d-8c93e47436ce" Sep 12 23:56:09.895716 kubelet[2479]: I0912 23:56:09.895685 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:56:09.896370 kubelet[2479]: E0912 23:56:09.896113 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:09.897460 containerd[1447]: time="2025-09-12T23:56:09.896628674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 23:56:11.821912 kubelet[2479]: E0912 23:56:11.821799 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w5b24" podUID="2a2755c3-03c7-4f05-b24d-8c93e47436ce" Sep 12 23:56:11.977727 containerd[1447]: time="2025-09-12T23:56:11.977676046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:11.978298 containerd[1447]: time="2025-09-12T23:56:11.978253756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 23:56:11.979137 containerd[1447]: time="2025-09-12T23:56:11.979108740Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:11.981806 containerd[1447]: time="2025-09-12T23:56:11.981329461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:11.982189 containerd[1447]: time="2025-09-12T23:56:11.982071007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.085398575s" Sep 12 23:56:11.982189 containerd[1447]: time="2025-09-12T23:56:11.982104167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" 
returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 23:56:11.984072 containerd[1447]: time="2025-09-12T23:56:11.983945574Z" level=info msg="CreateContainer within sandbox \"6a8d338af7cb0f93ad24fda7ae74ad1229fd2aea497c5be8a813ed50fc5d7e72\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 23:56:12.008846 containerd[1447]: time="2025-09-12T23:56:12.008781174Z" level=info msg="CreateContainer within sandbox \"6a8d338af7cb0f93ad24fda7ae74ad1229fd2aea497c5be8a813ed50fc5d7e72\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a1f1469abe322b7455339d4ca08d4baa71860275fd35b9822eb73ae57110321f\"" Sep 12 23:56:12.009614 containerd[1447]: time="2025-09-12T23:56:12.009343524Z" level=info msg="StartContainer for \"a1f1469abe322b7455339d4ca08d4baa71860275fd35b9822eb73ae57110321f\"" Sep 12 23:56:12.037952 systemd[1]: Started cri-containerd-a1f1469abe322b7455339d4ca08d4baa71860275fd35b9822eb73ae57110321f.scope - libcontainer container a1f1469abe322b7455339d4ca08d4baa71860275fd35b9822eb73ae57110321f. Sep 12 23:56:12.065120 containerd[1447]: time="2025-09-12T23:56:12.064997166Z" level=info msg="StartContainer for \"a1f1469abe322b7455339d4ca08d4baa71860275fd35b9822eb73ae57110321f\" returns successfully" Sep 12 23:56:12.633469 systemd[1]: cri-containerd-a1f1469abe322b7455339d4ca08d4baa71860275fd35b9822eb73ae57110321f.scope: Deactivated successfully. Sep 12 23:56:12.664430 kubelet[2479]: I0912 23:56:12.664398 2479 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 23:56:12.702306 containerd[1447]: time="2025-09-12T23:56:12.702222481Z" level=info msg="shim disconnected" id=a1f1469abe322b7455339d4ca08d4baa71860275fd35b9822eb73ae57110321f namespace=k8s.io Sep 12 23:56:12.702306 containerd[1447]: time="2025-09-12T23:56:12.702280640Z" level=warning msg="cleaning up after shim disconnected" id=a1f1469abe322b7455339d4ca08d4baa71860275fd35b9822eb73ae57110321f namespace=k8s.io Sep 12 23:56:12.702306 containerd[1447]: time="2025-09-12T23:56:12.702289320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:56:12.734309 systemd[1]: Created slice kubepods-besteffort-pod4ac91082_d88e_4327_ad72_86092b0b92eb.slice - libcontainer container kubepods-besteffort-pod4ac91082_d88e_4327_ad72_86092b0b92eb.slice. Sep 12 23:56:12.754288 systemd[1]: Created slice kubepods-burstable-pod6a49c785_0fd4_496d_8891_33121806033d.slice - libcontainer container kubepods-burstable-pod6a49c785_0fd4_496d_8891_33121806033d.slice. Sep 12 23:56:12.760940 systemd[1]: Created slice kubepods-burstable-pod9c36fd1a_8b9e_4673_89de_740f2dd47379.slice - libcontainer container kubepods-burstable-pod9c36fd1a_8b9e_4673_89de_740f2dd47379.slice. Sep 12 23:56:12.768339 systemd[1]: Created slice kubepods-besteffort-pod84ebbe55_2c84_464d_aba6_aefb412ce42b.slice - libcontainer container kubepods-besteffort-pod84ebbe55_2c84_464d_aba6_aefb412ce42b.slice. Sep 12 23:56:12.773373 systemd[1]: Created slice kubepods-besteffort-pod2afa9b7a_4677_403d_8e2e_d6308cb04db4.slice - libcontainer container kubepods-besteffort-pod2afa9b7a_4677_403d_8e2e_d6308cb04db4.slice. Sep 12 23:56:12.778859 systemd[1]: Created slice kubepods-besteffort-poda133ea3e_84b0_48ca_a5f9_b58285cab3ba.slice - libcontainer container kubepods-besteffort-poda133ea3e_84b0_48ca_a5f9_b58285cab3ba.slice. 
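[Editor's note] The burst of RunPodSandbox failures below all trace back to one missing file: Calico's CNI plugin refuses to wire up any pod until calico-node has written its node identity to /var/lib/calico/nodename. A minimal sketch of that gate, assuming only what the error text below states (the real check lives inside Calico's CNI plugin):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// File the CNI plugin stats before setting up any pod network;
	// calico-node writes it once it is running on the node.
	const nodenameFile = "/var/lib/calico/nodename"

	if _, err := os.Stat(nodenameFile); err != nil {
		// err formats as "stat /var/lib/calico/nodename: no such file or
		// directory", matching the sandbox failures below.
		fmt.Printf("%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		return
	}
	fmt.Println("node identity present; CNI ADD can proceed")
}
```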
Sep 12 23:56:12.784006 systemd[1]: Created slice kubepods-besteffort-pod6e072d2b_0542_4e7a_92e2_10800c8d71d7.slice - libcontainer container kubepods-besteffort-pod6e072d2b_0542_4e7a_92e2_10800c8d71d7.slice. Sep 12 23:56:12.808562 kubelet[2479]: I0912 23:56:12.808520 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2afa9b7a-4677-403d-8e2e-d6308cb04db4-whisker-ca-bundle\") pod \"whisker-5cc464486f-psq9v\" (UID: \"2afa9b7a-4677-403d-8e2e-d6308cb04db4\") " pod="calico-system/whisker-5cc464486f-psq9v" Sep 12 23:56:12.808686 kubelet[2479]: I0912 23:56:12.808583 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgbtm\" (UniqueName: \"kubernetes.io/projected/6e072d2b-0542-4e7a-92e2-10800c8d71d7-kube-api-access-fgbtm\") pod \"calico-apiserver-6dbd74567-2bsfz\" (UID: \"6e072d2b-0542-4e7a-92e2-10800c8d71d7\") " pod="calico-apiserver/calico-apiserver-6dbd74567-2bsfz" Sep 12 23:56:12.808686 kubelet[2479]: I0912 23:56:12.808612 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp2fk\" (UniqueName: \"kubernetes.io/projected/a133ea3e-84b0-48ca-a5f9-b58285cab3ba-kube-api-access-wp2fk\") pod \"goldmane-54d579b49d-fbbp9\" (UID: \"a133ea3e-84b0-48ca-a5f9-b58285cab3ba\") " pod="calico-system/goldmane-54d579b49d-fbbp9" Sep 12 23:56:12.808686 kubelet[2479]: I0912 23:56:12.808633 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9fpx\" (UniqueName: \"kubernetes.io/projected/2afa9b7a-4677-403d-8e2e-d6308cb04db4-kube-api-access-n9fpx\") pod \"whisker-5cc464486f-psq9v\" (UID: \"2afa9b7a-4677-403d-8e2e-d6308cb04db4\") " pod="calico-system/whisker-5cc464486f-psq9v" Sep 12 23:56:12.808686 kubelet[2479]: I0912 23:56:12.808668 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a133ea3e-84b0-48ca-a5f9-b58285cab3ba-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-fbbp9\" (UID: \"a133ea3e-84b0-48ca-a5f9-b58285cab3ba\") " pod="calico-system/goldmane-54d579b49d-fbbp9" Sep 12 23:56:12.808808 kubelet[2479]: I0912 23:56:12.808710 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkxwz\" (UniqueName: \"kubernetes.io/projected/6a49c785-0fd4-496d-8891-33121806033d-kube-api-access-dkxwz\") pod \"coredns-668d6bf9bc-ngcpg\" (UID: \"6a49c785-0fd4-496d-8891-33121806033d\") " pod="kube-system/coredns-668d6bf9bc-ngcpg" Sep 12 23:56:12.808808 kubelet[2479]: I0912 23:56:12.808729 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6e072d2b-0542-4e7a-92e2-10800c8d71d7-calico-apiserver-certs\") pod \"calico-apiserver-6dbd74567-2bsfz\" (UID: \"6e072d2b-0542-4e7a-92e2-10800c8d71d7\") " pod="calico-apiserver/calico-apiserver-6dbd74567-2bsfz" Sep 12 23:56:12.808808 kubelet[2479]: I0912 23:56:12.808744 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a133ea3e-84b0-48ca-a5f9-b58285cab3ba-goldmane-key-pair\") pod \"goldmane-54d579b49d-fbbp9\" (UID: \"a133ea3e-84b0-48ca-a5f9-b58285cab3ba\") " pod="calico-system/goldmane-54d579b49d-fbbp9" Sep 12 
23:56:12.808808 kubelet[2479]: I0912 23:56:12.808771 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ac91082-d88e-4327-ad72-86092b0b92eb-tigera-ca-bundle\") pod \"calico-kube-controllers-66cb8fb495-tc6df\" (UID: \"4ac91082-d88e-4327-ad72-86092b0b92eb\") " pod="calico-system/calico-kube-controllers-66cb8fb495-tc6df" Sep 12 23:56:12.808808 kubelet[2479]: I0912 23:56:12.808787 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk2gx\" (UniqueName: \"kubernetes.io/projected/9c36fd1a-8b9e-4673-89de-740f2dd47379-kube-api-access-jk2gx\") pod \"coredns-668d6bf9bc-4f7pq\" (UID: \"9c36fd1a-8b9e-4673-89de-740f2dd47379\") " pod="kube-system/coredns-668d6bf9bc-4f7pq" Sep 12 23:56:12.808923 kubelet[2479]: I0912 23:56:12.808815 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84ebbe55-2c84-464d-aba6-aefb412ce42b-calico-apiserver-certs\") pod \"calico-apiserver-6dbd74567-jv87r\" (UID: \"84ebbe55-2c84-464d-aba6-aefb412ce42b\") " pod="calico-apiserver/calico-apiserver-6dbd74567-jv87r" Sep 12 23:56:12.808923 kubelet[2479]: I0912 23:56:12.808831 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a133ea3e-84b0-48ca-a5f9-b58285cab3ba-config\") pod \"goldmane-54d579b49d-fbbp9\" (UID: \"a133ea3e-84b0-48ca-a5f9-b58285cab3ba\") " pod="calico-system/goldmane-54d579b49d-fbbp9" Sep 12 23:56:12.808923 kubelet[2479]: I0912 23:56:12.808847 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a49c785-0fd4-496d-8891-33121806033d-config-volume\") pod \"coredns-668d6bf9bc-ngcpg\" (UID: \"6a49c785-0fd4-496d-8891-33121806033d\") " pod="kube-system/coredns-668d6bf9bc-ngcpg" Sep 12 23:56:12.808923 kubelet[2479]: I0912 23:56:12.808864 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp2l6\" (UniqueName: \"kubernetes.io/projected/4ac91082-d88e-4327-ad72-86092b0b92eb-kube-api-access-fp2l6\") pod \"calico-kube-controllers-66cb8fb495-tc6df\" (UID: \"4ac91082-d88e-4327-ad72-86092b0b92eb\") " pod="calico-system/calico-kube-controllers-66cb8fb495-tc6df" Sep 12 23:56:12.808923 kubelet[2479]: I0912 23:56:12.808880 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c36fd1a-8b9e-4673-89de-740f2dd47379-config-volume\") pod \"coredns-668d6bf9bc-4f7pq\" (UID: \"9c36fd1a-8b9e-4673-89de-740f2dd47379\") " pod="kube-system/coredns-668d6bf9bc-4f7pq" Sep 12 23:56:12.809040 kubelet[2479]: I0912 23:56:12.808898 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5tbb\" (UniqueName: \"kubernetes.io/projected/84ebbe55-2c84-464d-aba6-aefb412ce42b-kube-api-access-t5tbb\") pod \"calico-apiserver-6dbd74567-jv87r\" (UID: \"84ebbe55-2c84-464d-aba6-aefb412ce42b\") " pod="calico-apiserver/calico-apiserver-6dbd74567-jv87r" Sep 12 23:56:12.809040 kubelet[2479]: I0912 23:56:12.808914 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/2afa9b7a-4677-403d-8e2e-d6308cb04db4-whisker-backend-key-pair\") pod \"whisker-5cc464486f-psq9v\" (UID: \"2afa9b7a-4677-403d-8e2e-d6308cb04db4\") " pod="calico-system/whisker-5cc464486f-psq9v" Sep 12 23:56:12.904702 containerd[1447]: time="2025-09-12T23:56:12.903880411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 23:56:13.004981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1f1469abe322b7455339d4ca08d4baa71860275fd35b9822eb73ae57110321f-rootfs.mount: Deactivated successfully. Sep 12 23:56:13.053391 containerd[1447]: time="2025-09-12T23:56:13.053302796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cb8fb495-tc6df,Uid:4ac91082-d88e-4327-ad72-86092b0b92eb,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:13.057678 kubelet[2479]: E0912 23:56:13.057536 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:13.058527 containerd[1447]: time="2025-09-12T23:56:13.058246194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ngcpg,Uid:6a49c785-0fd4-496d-8891-33121806033d,Namespace:kube-system,Attempt:0,}" Sep 12 23:56:13.065014 kubelet[2479]: E0912 23:56:13.064922 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:13.068233 containerd[1447]: time="2025-09-12T23:56:13.068063032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4f7pq,Uid:9c36fd1a-8b9e-4673-89de-740f2dd47379,Namespace:kube-system,Attempt:0,}" Sep 12 23:56:13.072519 containerd[1447]: time="2025-09-12T23:56:13.072149765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbd74567-jv87r,Uid:84ebbe55-2c84-464d-aba6-aefb412ce42b,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:56:13.076879 containerd[1447]: time="2025-09-12T23:56:13.076847887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cc464486f-psq9v,Uid:2afa9b7a-4677-403d-8e2e-d6308cb04db4,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:13.081318 containerd[1447]: time="2025-09-12T23:56:13.081287014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-fbbp9,Uid:a133ea3e-84b0-48ca-a5f9-b58285cab3ba,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:13.089059 containerd[1447]: time="2025-09-12T23:56:13.088925168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbd74567-2bsfz,Uid:6e072d2b-0542-4e7a-92e2-10800c8d71d7,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:56:13.194328 containerd[1447]: time="2025-09-12T23:56:13.194201749Z" level=error msg="Failed to destroy network for sandbox \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.195256 containerd[1447]: time="2025-09-12T23:56:13.195179493Z" level=error msg="encountered an error cleaning up failed sandbox \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Sep 12 23:56:13.195326 containerd[1447]: time="2025-09-12T23:56:13.195279171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ngcpg,Uid:6a49c785-0fd4-496d-8891-33121806033d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.199289 kubelet[2479]: E0912 23:56:13.199171 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.202064 kubelet[2479]: E0912 23:56:13.201877 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ngcpg" Sep 12 23:56:13.202064 kubelet[2479]: E0912 23:56:13.201924 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ngcpg" Sep 12 23:56:13.202064 kubelet[2479]: E0912 23:56:13.201990 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ngcpg_kube-system(6a49c785-0fd4-496d-8891-33121806033d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ngcpg_kube-system(6a49c785-0fd4-496d-8891-33121806033d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ngcpg" podUID="6a49c785-0fd4-496d-8891-33121806033d" Sep 12 23:56:13.204605 containerd[1447]: time="2025-09-12T23:56:13.204559058Z" level=error msg="Failed to destroy network for sandbox \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.205850 containerd[1447]: time="2025-09-12T23:56:13.205513802Z" level=error msg="encountered an error cleaning up failed sandbox \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.206461 containerd[1447]: time="2025-09-12T23:56:13.206401868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cb8fb495-tc6df,Uid:4ac91082-d88e-4327-ad72-86092b0b92eb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.207030 kubelet[2479]: E0912 23:56:13.206680 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.207030 kubelet[2479]: E0912 23:56:13.206723 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66cb8fb495-tc6df" Sep 12 23:56:13.207030 kubelet[2479]: E0912 23:56:13.206748 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66cb8fb495-tc6df" Sep 12 23:56:13.207170 kubelet[2479]: E0912 23:56:13.206804 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66cb8fb495-tc6df_calico-system(4ac91082-d88e-4327-ad72-86092b0b92eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66cb8fb495-tc6df_calico-system(4ac91082-d88e-4327-ad72-86092b0b92eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66cb8fb495-tc6df" podUID="4ac91082-d88e-4327-ad72-86092b0b92eb" Sep 12 23:56:13.212874 containerd[1447]: time="2025-09-12T23:56:13.212824242Z" level=error msg="Failed to destroy network for sandbox \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.213402 containerd[1447]: time="2025-09-12T23:56:13.213352073Z" level=error msg="encountered an error cleaning up failed sandbox \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.213754 containerd[1447]: time="2025-09-12T23:56:13.213485511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4f7pq,Uid:9c36fd1a-8b9e-4673-89de-740f2dd47379,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.214417 kubelet[2479]: E0912 23:56:13.214105 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.214417 kubelet[2479]: E0912 23:56:13.214170 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4f7pq" Sep 12 23:56:13.214417 kubelet[2479]: E0912 23:56:13.214188 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4f7pq" Sep 12 23:56:13.214544 kubelet[2479]: E0912 23:56:13.214232 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4f7pq_kube-system(9c36fd1a-8b9e-4673-89de-740f2dd47379)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4f7pq_kube-system(9c36fd1a-8b9e-4673-89de-740f2dd47379)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4f7pq" podUID="9c36fd1a-8b9e-4673-89de-740f2dd47379" Sep 12 23:56:13.224445 containerd[1447]: time="2025-09-12T23:56:13.224317132Z" level=error msg="Failed to destroy network for sandbox \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.225412 containerd[1447]: time="2025-09-12T23:56:13.225372674Z" level=error msg="encountered an error cleaning up failed sandbox 
\"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.225505 containerd[1447]: time="2025-09-12T23:56:13.225428673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbd74567-jv87r,Uid:84ebbe55-2c84-464d-aba6-aefb412ce42b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.225904 kubelet[2479]: E0912 23:56:13.225627 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.225904 kubelet[2479]: E0912 23:56:13.225682 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dbd74567-jv87r" Sep 12 23:56:13.225904 kubelet[2479]: E0912 23:56:13.225702 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dbd74567-jv87r" Sep 12 23:56:13.225994 kubelet[2479]: E0912 23:56:13.225771 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dbd74567-jv87r_calico-apiserver(84ebbe55-2c84-464d-aba6-aefb412ce42b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dbd74567-jv87r_calico-apiserver(84ebbe55-2c84-464d-aba6-aefb412ce42b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dbd74567-jv87r" podUID="84ebbe55-2c84-464d-aba6-aefb412ce42b" Sep 12 23:56:13.235796 containerd[1447]: time="2025-09-12T23:56:13.235490067Z" level=error msg="Failed to destroy network for sandbox \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 
23:56:13.236390 containerd[1447]: time="2025-09-12T23:56:13.236149816Z" level=error msg="encountered an error cleaning up failed sandbox \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.236493 containerd[1447]: time="2025-09-12T23:56:13.236370773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-fbbp9,Uid:a133ea3e-84b0-48ca-a5f9-b58285cab3ba,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.236706 kubelet[2479]: E0912 23:56:13.236657 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.236819 kubelet[2479]: E0912 23:56:13.236720 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-fbbp9" Sep 12 23:56:13.236819 kubelet[2479]: E0912 23:56:13.236742 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-fbbp9" Sep 12 23:56:13.236819 kubelet[2479]: E0912 23:56:13.236795 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-fbbp9_calico-system(a133ea3e-84b0-48ca-a5f9-b58285cab3ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-fbbp9_calico-system(a133ea3e-84b0-48ca-a5f9-b58285cab3ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-fbbp9" podUID="a133ea3e-84b0-48ca-a5f9-b58285cab3ba" Sep 12 23:56:13.241930 containerd[1447]: time="2025-09-12T23:56:13.241894802Z" level=error msg="Failed to destroy network for sandbox \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.243097 containerd[1447]: time="2025-09-12T23:56:13.243063822Z" level=error msg="encountered an error cleaning up failed sandbox \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.243152 containerd[1447]: time="2025-09-12T23:56:13.243130701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cc464486f-psq9v,Uid:2afa9b7a-4677-403d-8e2e-d6308cb04db4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.243358 kubelet[2479]: E0912 23:56:13.243323 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.243405 kubelet[2479]: E0912 23:56:13.243375 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cc464486f-psq9v" Sep 12 23:56:13.243405 kubelet[2479]: E0912 23:56:13.243394 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cc464486f-psq9v" Sep 12 23:56:13.243455 kubelet[2479]: E0912 23:56:13.243432 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cc464486f-psq9v_calico-system(2afa9b7a-4677-403d-8e2e-d6308cb04db4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5cc464486f-psq9v_calico-system(2afa9b7a-4677-403d-8e2e-d6308cb04db4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cc464486f-psq9v" podUID="2afa9b7a-4677-403d-8e2e-d6308cb04db4" Sep 12 23:56:13.243769 containerd[1447]: time="2025-09-12T23:56:13.243736291Z" level=error msg="Failed to destroy network for sandbox \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.244013 containerd[1447]: time="2025-09-12T23:56:13.243987687Z" level=error msg="encountered an error cleaning up failed sandbox \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.244057 containerd[1447]: time="2025-09-12T23:56:13.244031406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbd74567-2bsfz,Uid:6e072d2b-0542-4e7a-92e2-10800c8d71d7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.244203 kubelet[2479]: E0912 23:56:13.244177 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.244249 kubelet[2479]: E0912 23:56:13.244218 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dbd74567-2bsfz" Sep 12 23:56:13.244249 kubelet[2479]: E0912 23:56:13.244236 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dbd74567-2bsfz" Sep 12 23:56:13.244307 kubelet[2479]: E0912 23:56:13.244266 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dbd74567-2bsfz_calico-apiserver(6e072d2b-0542-4e7a-92e2-10800c8d71d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dbd74567-2bsfz_calico-apiserver(6e072d2b-0542-4e7a-92e2-10800c8d71d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dbd74567-2bsfz" podUID="6e072d2b-0542-4e7a-92e2-10800c8d71d7" Sep 12 23:56:13.828544 systemd[1]: Created slice kubepods-besteffort-pod2a2755c3_03c7_4f05_b24d_8c93e47436ce.slice - libcontainer 
container kubepods-besteffort-pod2a2755c3_03c7_4f05_b24d_8c93e47436ce.slice. Sep 12 23:56:13.830701 containerd[1447]: time="2025-09-12T23:56:13.830666318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w5b24,Uid:2a2755c3-03c7-4f05-b24d-8c93e47436ce,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:13.891119 containerd[1447]: time="2025-09-12T23:56:13.890982202Z" level=error msg="Failed to destroy network for sandbox \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.891620 containerd[1447]: time="2025-09-12T23:56:13.891453114Z" level=error msg="encountered an error cleaning up failed sandbox \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.891620 containerd[1447]: time="2025-09-12T23:56:13.891506073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w5b24,Uid:2a2755c3-03c7-4f05-b24d-8c93e47436ce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.891753 kubelet[2479]: E0912 23:56:13.891714 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.891828 kubelet[2479]: E0912 23:56:13.891784 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w5b24" Sep 12 23:56:13.891828 kubelet[2479]: E0912 23:56:13.891804 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w5b24" Sep 12 23:56:13.891901 kubelet[2479]: E0912 23:56:13.891875 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w5b24_calico-system(2a2755c3-03c7-4f05-b24d-8c93e47436ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w5b24_calico-system(2a2755c3-03c7-4f05-b24d-8c93e47436ce)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w5b24" podUID="2a2755c3-03c7-4f05-b24d-8c93e47436ce" Sep 12 23:56:13.905346 containerd[1447]: time="2025-09-12T23:56:13.905305605Z" level=info msg="StopPodSandbox for \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\"" Sep 12 23:56:13.906375 containerd[1447]: time="2025-09-12T23:56:13.906130952Z" level=info msg="Ensure that sandbox 289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974 in task-service has been cleanup successfully" Sep 12 23:56:13.910954 kubelet[2479]: I0912 23:56:13.910924 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Sep 12 23:56:13.911055 kubelet[2479]: I0912 23:56:13.910972 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:13.911055 kubelet[2479]: I0912 23:56:13.910985 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Sep 12 23:56:13.911055 kubelet[2479]: I0912 23:56:13.910996 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:13.914594 containerd[1447]: time="2025-09-12T23:56:13.913612268Z" level=info msg="StopPodSandbox for \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\"" Sep 12 23:56:13.915113 containerd[1447]: time="2025-09-12T23:56:13.915085284Z" level=info msg="StopPodSandbox for \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\"" Sep 12 23:56:13.915187 kubelet[2479]: I0912 23:56:13.915157 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:13.916390 containerd[1447]: time="2025-09-12T23:56:13.915238921Z" level=info msg="StopPodSandbox for \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\"" Sep 12 23:56:13.916573 containerd[1447]: time="2025-09-12T23:56:13.916542180Z" level=info msg="Ensure that sandbox d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87 in task-service has been cleanup successfully" Sep 12 23:56:13.916649 containerd[1447]: time="2025-09-12T23:56:13.916626818Z" level=info msg="Ensure that sandbox 6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce in task-service has been cleanup successfully" Sep 12 23:56:13.917194 containerd[1447]: time="2025-09-12T23:56:13.916054548Z" level=info msg="StopPodSandbox for \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\"" Sep 12 23:56:13.917625 kubelet[2479]: I0912 23:56:13.917583 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:13.919345 containerd[1447]: time="2025-09-12T23:56:13.919318374Z" level=info msg="StopPodSandbox for \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\"" Sep 12 23:56:13.920974 containerd[1447]: time="2025-09-12T23:56:13.920935507Z" level=info msg="Ensure that sandbox 
4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0 in task-service has been cleanup successfully" Sep 12 23:56:13.923278 kubelet[2479]: I0912 23:56:13.923234 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:13.925469 containerd[1447]: time="2025-09-12T23:56:13.925428913Z" level=info msg="StopPodSandbox for \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\"" Sep 12 23:56:13.925626 containerd[1447]: time="2025-09-12T23:56:13.925600470Z" level=info msg="Ensure that sandbox aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2 in task-service has been cleanup successfully" Sep 12 23:56:13.931493 kubelet[2479]: I0912 23:56:13.931445 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:13.934965 containerd[1447]: time="2025-09-12T23:56:13.934047851Z" level=info msg="StopPodSandbox for \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\"" Sep 12 23:56:13.937199 containerd[1447]: time="2025-09-12T23:56:13.937155799Z" level=info msg="Ensure that sandbox 5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a in task-service has been cleanup successfully" Sep 12 23:56:13.972541 containerd[1447]: time="2025-09-12T23:56:13.972487896Z" level=info msg="Ensure that sandbox 2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170 in task-service has been cleanup successfully" Sep 12 23:56:13.984318 containerd[1447]: time="2025-09-12T23:56:13.984260741Z" level=error msg="StopPodSandbox for \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\" failed" error="failed to destroy network for sandbox \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.986503 kubelet[2479]: E0912 23:56:13.986121 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:13.990280 containerd[1447]: time="2025-09-12T23:56:13.990235283Z" level=info msg="Ensure that sandbox 6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816 in task-service has been cleanup successfully" Sep 12 23:56:13.990950 containerd[1447]: time="2025-09-12T23:56:13.990901352Z" level=error msg="StopPodSandbox for \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\" failed" error="failed to destroy network for sandbox \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:13.991923 kubelet[2479]: E0912 23:56:13.991834 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0"} Sep 12 
23:56:13.992007 kubelet[2479]: E0912 23:56:13.991978 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Sep 12 23:56:13.992056 kubelet[2479]: E0912 23:56:13.992039 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974"} Sep 12 23:56:13.992086 kubelet[2479]: E0912 23:56:13.992066 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c36fd1a-8b9e-4673-89de-740f2dd47379\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:13.992153 kubelet[2479]: E0912 23:56:13.992089 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c36fd1a-8b9e-4673-89de-740f2dd47379\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4f7pq" podUID="9c36fd1a-8b9e-4673-89de-740f2dd47379" Sep 12 23:56:13.992153 kubelet[2479]: E0912 23:56:13.991995 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e072d2b-0542-4e7a-92e2-10800c8d71d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:13.992153 kubelet[2479]: E0912 23:56:13.992124 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e072d2b-0542-4e7a-92e2-10800c8d71d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dbd74567-2bsfz" podUID="6e072d2b-0542-4e7a-92e2-10800c8d71d7" Sep 12 23:56:13.998786 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974-shm.mount: Deactivated successfully. Sep 12 23:56:13.998893 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170-shm.mount: Deactivated successfully. 
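Every RunPodSandbox and StopPodSandbox failure in the entries above bottoms out in the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes only once it is running. A minimal Go sketch of just that check, with the path and error text taken verbatim from the log; the program around them is illustrative, not Calico's actual code:

// nodename_check.go - reproduces the precondition behind the CNI failures above.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename" // written by calico/node on startup

func main() {
	if _, err := os.Stat(nodenameFile); errors.Is(err, fs.ErrNotExist) {
		// Until calico/node is up, both sandbox setup (add) and teardown
		// (delete) fail with exactly this message.
		fmt.Printf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile)
		os.Exit(1)
	}
	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		panic(err)
	}
	fmt.Printf("CNI would operate as node %q\n", name)
}

Once calico/node starts (23:56:17 below, after its image pull completes), the file exists and the same sandbox operations begin to succeed.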
Sep 12 23:56:13.998947 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87-shm.mount: Deactivated successfully. Sep 12 23:56:14.007079 containerd[1447]: time="2025-09-12T23:56:14.007010130Z" level=error msg="StopPodSandbox for \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\" failed" error="failed to destroy network for sandbox \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:14.007304 kubelet[2479]: E0912 23:56:14.007254 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:14.007373 kubelet[2479]: E0912 23:56:14.007318 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87"} Sep 12 23:56:14.007373 kubelet[2479]: E0912 23:56:14.007353 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4ac91082-d88e-4327-ad72-86092b0b92eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:14.007468 kubelet[2479]: E0912 23:56:14.007375 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4ac91082-d88e-4327-ad72-86092b0b92eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66cb8fb495-tc6df" podUID="4ac91082-d88e-4327-ad72-86092b0b92eb" Sep 12 23:56:14.017858 containerd[1447]: time="2025-09-12T23:56:14.016380541Z" level=error msg="StopPodSandbox for \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\" failed" error="failed to destroy network for sandbox \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:14.018867 containerd[1447]: time="2025-09-12T23:56:14.018824862Z" level=error msg="StopPodSandbox for \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\" failed" error="failed to destroy network for sandbox \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:14.021088 kubelet[2479]: E0912 23:56:14.020288 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:14.021088 kubelet[2479]: E0912 23:56:14.020337 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a"} Sep 12 23:56:14.021088 kubelet[2479]: E0912 23:56:14.020377 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"84ebbe55-2c84-464d-aba6-aefb412ce42b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:14.021088 kubelet[2479]: E0912 23:56:14.020401 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"84ebbe55-2c84-464d-aba6-aefb412ce42b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dbd74567-jv87r" podUID="84ebbe55-2c84-464d-aba6-aefb412ce42b" Sep 12 23:56:14.021308 kubelet[2479]: E0912 23:56:14.020522 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:14.021308 kubelet[2479]: E0912 23:56:14.020541 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce"} Sep 12 23:56:14.021308 kubelet[2479]: E0912 23:56:14.020564 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2a2755c3-03c7-4f05-b24d-8c93e47436ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:14.021308 kubelet[2479]: E0912 23:56:14.020583 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2a2755c3-03c7-4f05-b24d-8c93e47436ce\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w5b24" podUID="2a2755c3-03c7-4f05-b24d-8c93e47436ce" Sep 12 23:56:14.027463 containerd[1447]: time="2025-09-12T23:56:14.027419246Z" level=error msg="StopPodSandbox for \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\" failed" error="failed to destroy network for sandbox \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:14.027838 kubelet[2479]: E0912 23:56:14.027793 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:14.027911 kubelet[2479]: E0912 23:56:14.027846 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2"} Sep 12 23:56:14.027911 kubelet[2479]: E0912 23:56:14.027888 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2afa9b7a-4677-403d-8e2e-d6308cb04db4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:14.027992 kubelet[2479]: E0912 23:56:14.027908 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2afa9b7a-4677-403d-8e2e-d6308cb04db4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cc464486f-psq9v" podUID="2afa9b7a-4677-403d-8e2e-d6308cb04db4" Sep 12 23:56:14.033349 containerd[1447]: time="2025-09-12T23:56:14.033307873Z" level=error msg="StopPodSandbox for \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\" failed" error="failed to destroy network for sandbox \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:14.033709 kubelet[2479]: E0912 23:56:14.033662 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Sep 12 23:56:14.033790 kubelet[2479]: E0912 23:56:14.033720 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816"} Sep 12 23:56:14.033834 kubelet[2479]: E0912 23:56:14.033751 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a133ea3e-84b0-48ca-a5f9-b58285cab3ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:14.033834 kubelet[2479]: E0912 23:56:14.033812 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a133ea3e-84b0-48ca-a5f9-b58285cab3ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-fbbp9" podUID="a133ea3e-84b0-48ca-a5f9-b58285cab3ba" Sep 12 23:56:14.045344 containerd[1447]: time="2025-09-12T23:56:14.045298002Z" level=error msg="StopPodSandbox for \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\" failed" error="failed to destroy network for sandbox \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:14.045568 kubelet[2479]: E0912 23:56:14.045526 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:14.045624 kubelet[2479]: E0912 23:56:14.045585 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170"} Sep 12 23:56:14.045654 kubelet[2479]: E0912 23:56:14.045620 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a49c785-0fd4-496d-8891-33121806033d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Sep 12 23:56:14.045654 kubelet[2479]: E0912 23:56:14.045640 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a49c785-0fd4-496d-8891-33121806033d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ngcpg" podUID="6a49c785-0fd4-496d-8891-33121806033d" Sep 12 23:56:15.763184 kubelet[2479]: I0912 23:56:15.763130 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:56:15.763833 kubelet[2479]: E0912 23:56:15.763435 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:15.939637 kubelet[2479]: E0912 23:56:15.939182 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:16.708693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1047916366.mount: Deactivated successfully. Sep 12 23:56:17.070373 containerd[1447]: time="2025-09-12T23:56:17.070324039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:17.071446 containerd[1447]: time="2025-09-12T23:56:17.071231026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 23:56:17.072367 containerd[1447]: time="2025-09-12T23:56:17.072329170Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:17.074561 containerd[1447]: time="2025-09-12T23:56:17.074497300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:17.076741 containerd[1447]: time="2025-09-12T23:56:17.076559390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.172634421s" Sep 12 23:56:17.076741 containerd[1447]: time="2025-09-12T23:56:17.076627270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 23:56:17.083476 containerd[1447]: time="2025-09-12T23:56:17.083434173Z" level=info msg="CreateContainer within sandbox \"6a8d338af7cb0f93ad24fda7ae74ad1229fd2aea497c5be8a813ed50fc5d7e72\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 23:56:17.108098 containerd[1447]: time="2025-09-12T23:56:17.108011425Z" level=info msg="CreateContainer within sandbox \"6a8d338af7cb0f93ad24fda7ae74ad1229fd2aea497c5be8a813ed50fc5d7e72\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"02f736256d4dc2ab3474bf19bf3ee5a81932f1c3b2294a698f0334618d331b1c\"" Sep 12 23:56:17.108781 containerd[1447]: time="2025-09-12T23:56:17.108679376Z" level=info msg="StartContainer for \"02f736256d4dc2ab3474bf19bf3ee5a81932f1c3b2294a698f0334618d331b1c\"" Sep 12 23:56:17.161922 systemd[1]: Started cri-containerd-02f736256d4dc2ab3474bf19bf3ee5a81932f1c3b2294a698f0334618d331b1c.scope - libcontainer container 02f736256d4dc2ab3474bf19bf3ee5a81932f1c3b2294a698f0334618d331b1c. Sep 12 23:56:17.186516 containerd[1447]: time="2025-09-12T23:56:17.186473315Z" level=info msg="StartContainer for \"02f736256d4dc2ab3474bf19bf3ee5a81932f1c3b2294a698f0334618d331b1c\" returns successfully" Sep 12 23:56:17.333924 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 23:56:17.334067 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 12 23:56:17.434567 containerd[1447]: time="2025-09-12T23:56:17.434512766Z" level=info msg="StopPodSandbox for \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\"" Sep 12 23:56:17.610243 containerd[1447]: 2025-09-12 23:56:17.528 [INFO][3769] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:17.610243 containerd[1447]: 2025-09-12 23:56:17.530 [INFO][3769] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" iface="eth0" netns="/var/run/netns/cni-fa2e0b0a-c9d6-0354-b99d-b63808df7211" Sep 12 23:56:17.610243 containerd[1447]: 2025-09-12 23:56:17.530 [INFO][3769] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" iface="eth0" netns="/var/run/netns/cni-fa2e0b0a-c9d6-0354-b99d-b63808df7211" Sep 12 23:56:17.610243 containerd[1447]: 2025-09-12 23:56:17.531 [INFO][3769] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" iface="eth0" netns="/var/run/netns/cni-fa2e0b0a-c9d6-0354-b99d-b63808df7211" Sep 12 23:56:17.610243 containerd[1447]: 2025-09-12 23:56:17.531 [INFO][3769] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:17.610243 containerd[1447]: 2025-09-12 23:56:17.531 [INFO][3769] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:17.610243 containerd[1447]: 2025-09-12 23:56:17.593 [INFO][3780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" HandleID="k8s-pod-network.aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Workload="localhost-k8s-whisker--5cc464486f--psq9v-eth0" Sep 12 23:56:17.610243 containerd[1447]: 2025-09-12 23:56:17.593 [INFO][3780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:17.610243 containerd[1447]: 2025-09-12 23:56:17.593 [INFO][3780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:17.610243 containerd[1447]: 2025-09-12 23:56:17.604 [WARNING][3780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" HandleID="k8s-pod-network.aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Workload="localhost-k8s-whisker--5cc464486f--psq9v-eth0" Sep 12 23:56:17.610243 containerd[1447]: 2025-09-12 23:56:17.604 [INFO][3780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" HandleID="k8s-pod-network.aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Workload="localhost-k8s-whisker--5cc464486f--psq9v-eth0" Sep 12 23:56:17.610243 containerd[1447]: 2025-09-12 23:56:17.606 [INFO][3780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:17.610243 containerd[1447]: 2025-09-12 23:56:17.608 [INFO][3769] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:17.610636 containerd[1447]: time="2025-09-12T23:56:17.610379677Z" level=info msg="TearDown network for sandbox \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\" successfully" Sep 12 23:56:17.610636 containerd[1447]: time="2025-09-12T23:56:17.610405197Z" level=info msg="StopPodSandbox for \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\" returns successfully" Sep 12 23:56:17.710073 systemd[1]: run-netns-cni\x2dfa2e0b0a\x2dc9d6\x2d0354\x2db99d\x2db63808df7211.mount: Deactivated successfully. Sep 12 23:56:17.740052 kubelet[2479]: I0912 23:56:17.739672 2479 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2afa9b7a-4677-403d-8e2e-d6308cb04db4-whisker-ca-bundle\") pod \"2afa9b7a-4677-403d-8e2e-d6308cb04db4\" (UID: \"2afa9b7a-4677-403d-8e2e-d6308cb04db4\") " Sep 12 23:56:17.740052 kubelet[2479]: I0912 23:56:17.739746 2479 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2afa9b7a-4677-403d-8e2e-d6308cb04db4-whisker-backend-key-pair\") pod \"2afa9b7a-4677-403d-8e2e-d6308cb04db4\" (UID: \"2afa9b7a-4677-403d-8e2e-d6308cb04db4\") " Sep 12 23:56:17.740052 kubelet[2479]: I0912 23:56:17.739799 2479 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9fpx\" (UniqueName: \"kubernetes.io/projected/2afa9b7a-4677-403d-8e2e-d6308cb04db4-kube-api-access-n9fpx\") pod \"2afa9b7a-4677-403d-8e2e-d6308cb04db4\" (UID: \"2afa9b7a-4677-403d-8e2e-d6308cb04db4\") " Sep 12 23:56:17.747796 kubelet[2479]: I0912 23:56:17.747578 2479 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2afa9b7a-4677-403d-8e2e-d6308cb04db4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2afa9b7a-4677-403d-8e2e-d6308cb04db4" (UID: "2afa9b7a-4677-403d-8e2e-d6308cb04db4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 23:56:17.761035 systemd[1]: var-lib-kubelet-pods-2afa9b7a\x2d4677\x2d403d\x2d8e2e\x2dd6308cb04db4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn9fpx.mount: Deactivated successfully. Sep 12 23:56:17.761134 systemd[1]: var-lib-kubelet-pods-2afa9b7a\x2d4677\x2d403d\x2d8e2e\x2dd6308cb04db4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
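The \x2d sequences in the mount units above (the run-netns unit, the sandbox shm mounts, the kubelet volume mounts) are systemd's path escaping: '/' becomes the unit-name separator '-', so any literal '-' inside a path component must be hex-escaped. A simplified reimplementation of what systemd-escape --path does; this is an illustrative sketch, not systemd's code:

// systemd_escape.go - approximates systemd's path-to-unit-name escaping.
package main

import "fmt"

// escapeComponent hex-escapes every byte that is not alphanumeric, '_',
// or a non-leading '.', so "-" becomes "\x2d" as in the log lines above.
func escapeComponent(s string) string {
	out := ""
	for i := 0; i < len(s); i++ {
		c := s[i]
		switch {
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.' && i > 0:
			out += string(c)
		default:
			out += fmt.Sprintf(`\x%02x`, c)
		}
	}
	return out
}

// escapePath splits a path on '/' and joins the escaped components with '-'.
func escapePath(p string) string {
	out, sep, start := "", "", 0
	for i := 0; i <= len(p); i++ {
		if i == len(p) || p[i] == '/' {
			if i > start {
				out += sep + escapeComponent(p[start:i])
				sep = "-"
			}
			start = i + 1
		}
	}
	return out
}

func main() {
	// Reconstructs the netns mount unit deactivated above.
	fmt.Println(escapePath("/run/netns/cni-fa2e0b0a-c9d6-0354-b99d-b63808df7211") + ".mount")
	// Output: run-netns-cni\x2dfa2e0b0a\x2dc9d6\x2d0354\x2db99d\x2db63808df7211.mount
}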
Sep 12 23:56:17.761879 kubelet[2479]: I0912 23:56:17.761834 2479 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2afa9b7a-4677-403d-8e2e-d6308cb04db4-kube-api-access-n9fpx" (OuterVolumeSpecName: "kube-api-access-n9fpx") pod "2afa9b7a-4677-403d-8e2e-d6308cb04db4" (UID: "2afa9b7a-4677-403d-8e2e-d6308cb04db4"). InnerVolumeSpecName "kube-api-access-n9fpx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 23:56:17.761966 kubelet[2479]: I0912 23:56:17.761877 2479 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2afa9b7a-4677-403d-8e2e-d6308cb04db4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2afa9b7a-4677-403d-8e2e-d6308cb04db4" (UID: "2afa9b7a-4677-403d-8e2e-d6308cb04db4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 23:56:17.840235 kubelet[2479]: I0912 23:56:17.840183 2479 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2afa9b7a-4677-403d-8e2e-d6308cb04db4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 23:56:17.840235 kubelet[2479]: I0912 23:56:17.840219 2479 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2afa9b7a-4677-403d-8e2e-d6308cb04db4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 23:56:17.840235 kubelet[2479]: I0912 23:56:17.840228 2479 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n9fpx\" (UniqueName: \"kubernetes.io/projected/2afa9b7a-4677-403d-8e2e-d6308cb04db4-kube-api-access-n9fpx\") on node \"localhost\" DevicePath \"\"" Sep 12 23:56:18.077285 systemd[1]: Removed slice kubepods-besteffort-pod2afa9b7a_4677_403d_8e2e_d6308cb04db4.slice - libcontainer container kubepods-besteffort-pod2afa9b7a_4677_403d_8e2e_d6308cb04db4.slice. Sep 12 23:56:18.091243 kubelet[2479]: I0912 23:56:18.091062 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tq8pp" podStartSLOduration=1.817231933 podStartE2EDuration="12.091045281s" podCreationTimestamp="2025-09-12 23:56:06 +0000 UTC" firstStartedPulling="2025-09-12 23:56:06.803484432 +0000 UTC m=+20.067715671" lastFinishedPulling="2025-09-12 23:56:17.07729778 +0000 UTC m=+30.341529019" observedRunningTime="2025-09-12 23:56:18.089299065 +0000 UTC m=+31.353530304" watchObservedRunningTime="2025-09-12 23:56:18.091045281 +0000 UTC m=+31.355276520" Sep 12 23:56:18.160876 systemd[1]: Created slice kubepods-besteffort-pod172ce775_c0f5_463e_9de8_e51bdb0c8f87.slice - libcontainer container kubepods-besteffort-pod172ce775_c0f5_463e_9de8_e51bdb0c8f87.slice. 
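The pod_startup_latency_tracker entry above reports two durations for calico-node-tq8pp, and the arithmetic can be checked from the logged timestamps alone: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A stdlib check, dropping the monotonic m=+ suffixes; the formula is inferred here rather than quoted from kubelet source, but it matches the logged values to the nanosecond:

// startup_latency.go - re-derives the two durations in the entry above.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-12 23:56:06 +0000 UTC")
	firstPull := mustParse("2025-09-12 23:56:06.803484432 +0000 UTC")
	lastPull := mustParse("2025-09-12 23:56:17.07729778 +0000 UTC")
	running := mustParse("2025-09-12 23:56:18.091045281 +0000 UTC")

	e2e := running.Sub(created)          // 12.091045281s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.817231933s  = podStartSLOduration
	fmt.Println(e2e, slo)
}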
Sep 12 23:56:18.245343 kubelet[2479]: I0912 23:56:18.245284 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/172ce775-c0f5-463e-9de8-e51bdb0c8f87-whisker-backend-key-pair\") pod \"whisker-57d44d686d-mtkdf\" (UID: \"172ce775-c0f5-463e-9de8-e51bdb0c8f87\") " pod="calico-system/whisker-57d44d686d-mtkdf" Sep 12 23:56:18.245343 kubelet[2479]: I0912 23:56:18.245343 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/172ce775-c0f5-463e-9de8-e51bdb0c8f87-whisker-ca-bundle\") pod \"whisker-57d44d686d-mtkdf\" (UID: \"172ce775-c0f5-463e-9de8-e51bdb0c8f87\") " pod="calico-system/whisker-57d44d686d-mtkdf" Sep 12 23:56:18.245517 kubelet[2479]: I0912 23:56:18.245364 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpnqb\" (UniqueName: \"kubernetes.io/projected/172ce775-c0f5-463e-9de8-e51bdb0c8f87-kube-api-access-mpnqb\") pod \"whisker-57d44d686d-mtkdf\" (UID: \"172ce775-c0f5-463e-9de8-e51bdb0c8f87\") " pod="calico-system/whisker-57d44d686d-mtkdf" Sep 12 23:56:18.468810 containerd[1447]: time="2025-09-12T23:56:18.468367172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d44d686d-mtkdf,Uid:172ce775-c0f5-463e-9de8-e51bdb0c8f87,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:18.622300 systemd-networkd[1374]: cali043a49aec94: Link UP Sep 12 23:56:18.622685 systemd-networkd[1374]: cali043a49aec94: Gained carrier Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.512 [INFO][3804] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.543 [INFO][3804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--57d44d686d--mtkdf-eth0 whisker-57d44d686d- calico-system 172ce775-c0f5-463e-9de8-e51bdb0c8f87 884 0 2025-09-12 23:56:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57d44d686d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-57d44d686d-mtkdf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali043a49aec94 [] [] }} ContainerID="711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" Namespace="calico-system" Pod="whisker-57d44d686d-mtkdf" WorkloadEndpoint="localhost-k8s-whisker--57d44d686d--mtkdf-" Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.543 [INFO][3804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" Namespace="calico-system" Pod="whisker-57d44d686d-mtkdf" WorkloadEndpoint="localhost-k8s-whisker--57d44d686d--mtkdf-eth0" Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.572 [INFO][3817] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" HandleID="k8s-pod-network.711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" Workload="localhost-k8s-whisker--57d44d686d--mtkdf-eth0" Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.572 [INFO][3817] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" HandleID="k8s-pod-network.711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" Workload="localhost-k8s-whisker--57d44d686d--mtkdf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cf00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-57d44d686d-mtkdf", "timestamp":"2025-09-12 23:56:18.572336993 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.572 [INFO][3817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.572 [INFO][3817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.572 [INFO][3817] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.584 [INFO][3817] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" host="localhost" Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.592 [INFO][3817] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.598 [INFO][3817] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.601 [INFO][3817] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.603 [INFO][3817] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.603 [INFO][3817] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" host="localhost" Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.604 [INFO][3817] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3 Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.609 [INFO][3817] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" host="localhost" Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.613 [INFO][3817] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" host="localhost" Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.613 [INFO][3817] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" host="localhost" Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.614 [INFO][3817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:56:18.639961 containerd[1447]: 2025-09-12 23:56:18.614 [INFO][3817] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" HandleID="k8s-pod-network.711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" Workload="localhost-k8s-whisker--57d44d686d--mtkdf-eth0" Sep 12 23:56:18.640683 containerd[1447]: 2025-09-12 23:56:18.616 [INFO][3804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" Namespace="calico-system" Pod="whisker-57d44d686d-mtkdf" WorkloadEndpoint="localhost-k8s-whisker--57d44d686d--mtkdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57d44d686d--mtkdf-eth0", GenerateName:"whisker-57d44d686d-", Namespace:"calico-system", SelfLink:"", UID:"172ce775-c0f5-463e-9de8-e51bdb0c8f87", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57d44d686d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-57d44d686d-mtkdf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali043a49aec94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:18.640683 containerd[1447]: 2025-09-12 23:56:18.616 [INFO][3804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" Namespace="calico-system" Pod="whisker-57d44d686d-mtkdf" WorkloadEndpoint="localhost-k8s-whisker--57d44d686d--mtkdf-eth0" Sep 12 23:56:18.640683 containerd[1447]: 2025-09-12 23:56:18.616 [INFO][3804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali043a49aec94 ContainerID="711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" Namespace="calico-system" Pod="whisker-57d44d686d-mtkdf" WorkloadEndpoint="localhost-k8s-whisker--57d44d686d--mtkdf-eth0" Sep 12 23:56:18.640683 containerd[1447]: 2025-09-12 23:56:18.623 [INFO][3804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" Namespace="calico-system" Pod="whisker-57d44d686d-mtkdf" WorkloadEndpoint="localhost-k8s-whisker--57d44d686d--mtkdf-eth0" Sep 12 23:56:18.640683 containerd[1447]: 2025-09-12 23:56:18.623 [INFO][3804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" Namespace="calico-system" Pod="whisker-57d44d686d-mtkdf" WorkloadEndpoint="localhost-k8s-whisker--57d44d686d--mtkdf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57d44d686d--mtkdf-eth0", GenerateName:"whisker-57d44d686d-", Namespace:"calico-system", SelfLink:"", UID:"172ce775-c0f5-463e-9de8-e51bdb0c8f87", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57d44d686d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3", Pod:"whisker-57d44d686d-mtkdf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali043a49aec94", MAC:"e2:b0:60:02:7b:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:18.640683 containerd[1447]: 2025-09-12 23:56:18.634 [INFO][3804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3" Namespace="calico-system" Pod="whisker-57d44d686d-mtkdf" WorkloadEndpoint="localhost-k8s-whisker--57d44d686d--mtkdf-eth0" Sep 12 23:56:18.657300 containerd[1447]: time="2025-09-12T23:56:18.657092797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:18.657300 containerd[1447]: time="2025-09-12T23:56:18.657157476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:18.657300 containerd[1447]: time="2025-09-12T23:56:18.657168715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:18.657300 containerd[1447]: time="2025-09-12T23:56:18.657252954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:18.674965 systemd[1]: Started cri-containerd-711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3.scope - libcontainer container 711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3. 
Sep 12 23:56:18.685618 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:56:18.702849 containerd[1447]: time="2025-09-12T23:56:18.702811533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d44d686d-mtkdf,Uid:172ce775-c0f5-463e-9de8-e51bdb0c8f87,Namespace:calico-system,Attempt:0,} returns sandbox id \"711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3\"" Sep 12 23:56:18.704232 containerd[1447]: time="2025-09-12T23:56:18.704208594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 23:56:18.824134 kubelet[2479]: I0912 23:56:18.824082 2479 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2afa9b7a-4677-403d-8e2e-d6308cb04db4" path="/var/lib/kubelet/pods/2afa9b7a-4677-403d-8e2e-d6308cb04db4/volumes" Sep 12 23:56:18.946788 kernel: bpftool[4002]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 23:56:19.060541 kubelet[2479]: I0912 23:56:19.060424 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:56:19.119098 systemd-networkd[1374]: vxlan.calico: Link UP Sep 12 23:56:19.119104 systemd-networkd[1374]: vxlan.calico: Gained carrier Sep 12 23:56:19.792166 containerd[1447]: time="2025-09-12T23:56:19.792113559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:19.792734 containerd[1447]: time="2025-09-12T23:56:19.792669872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 23:56:19.793716 containerd[1447]: time="2025-09-12T23:56:19.793689899Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:19.796456 containerd[1447]: time="2025-09-12T23:56:19.796189026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:19.796966 containerd[1447]: time="2025-09-12T23:56:19.796937256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.092684103s" Sep 12 23:56:19.797177 containerd[1447]: time="2025-09-12T23:56:19.797044214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 23:56:19.801165 containerd[1447]: time="2025-09-12T23:56:19.801068881Z" level=info msg="CreateContainer within sandbox \"711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 23:56:19.824175 containerd[1447]: time="2025-09-12T23:56:19.824108418Z" level=info msg="CreateContainer within sandbox \"711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"822543bcfae6fef2299a7a3ea709747ed55a4baedd36f391454bbf214bb84e7a\"" Sep 12 23:56:19.824927 containerd[1447]: 
time="2025-09-12T23:56:19.824846768Z" level=info msg="StartContainer for \"822543bcfae6fef2299a7a3ea709747ed55a4baedd36f391454bbf214bb84e7a\"" Sep 12 23:56:19.858956 systemd[1]: Started cri-containerd-822543bcfae6fef2299a7a3ea709747ed55a4baedd36f391454bbf214bb84e7a.scope - libcontainer container 822543bcfae6fef2299a7a3ea709747ed55a4baedd36f391454bbf214bb84e7a. Sep 12 23:56:19.901152 containerd[1447]: time="2025-09-12T23:56:19.901104363Z" level=info msg="StartContainer for \"822543bcfae6fef2299a7a3ea709747ed55a4baedd36f391454bbf214bb84e7a\" returns successfully" Sep 12 23:56:19.905502 containerd[1447]: time="2025-09-12T23:56:19.905375547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 23:56:20.418317 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Sep 12 23:56:20.610399 systemd-networkd[1374]: cali043a49aec94: Gained IPv6LL Sep 12 23:56:20.674849 kubelet[2479]: I0912 23:56:20.674701 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:56:21.660677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3167813604.mount: Deactivated successfully. Sep 12 23:56:21.709833 containerd[1447]: time="2025-09-12T23:56:21.708170758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:21.710566 containerd[1447]: time="2025-09-12T23:56:21.710519249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 23:56:21.713622 containerd[1447]: time="2025-09-12T23:56:21.713584691Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:21.733824 containerd[1447]: time="2025-09-12T23:56:21.733707363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:21.734809 containerd[1447]: time="2025-09-12T23:56:21.734751230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.829342523s" Sep 12 23:56:21.734809 containerd[1447]: time="2025-09-12T23:56:21.734807910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 23:56:21.741506 containerd[1447]: time="2025-09-12T23:56:21.741225511Z" level=info msg="CreateContainer within sandbox \"711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 23:56:21.764226 containerd[1447]: time="2025-09-12T23:56:21.764167748Z" level=info msg="CreateContainer within sandbox \"711a0e10e2000707d756790e6293bcebc9534ada60d4ef20e5c517bc4a337ad3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"43f4cc2c944820073cc708a660952278419b791c555e20b77a78d5c3bc348d7e\"" Sep 12 23:56:21.764765 containerd[1447]: time="2025-09-12T23:56:21.764729821Z" level=info 
msg="StartContainer for \"43f4cc2c944820073cc708a660952278419b791c555e20b77a78d5c3bc348d7e\"" Sep 12 23:56:21.797908 systemd[1]: Started cri-containerd-43f4cc2c944820073cc708a660952278419b791c555e20b77a78d5c3bc348d7e.scope - libcontainer container 43f4cc2c944820073cc708a660952278419b791c555e20b77a78d5c3bc348d7e. Sep 12 23:56:21.845254 containerd[1447]: time="2025-09-12T23:56:21.845195150Z" level=info msg="StartContainer for \"43f4cc2c944820073cc708a660952278419b791c555e20b77a78d5c3bc348d7e\" returns successfully" Sep 12 23:56:24.822081 containerd[1447]: time="2025-09-12T23:56:24.821711390Z" level=info msg="StopPodSandbox for \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\"" Sep 12 23:56:24.881700 kubelet[2479]: I0912 23:56:24.881629 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-57d44d686d-mtkdf" podStartSLOduration=3.848141157 podStartE2EDuration="6.881610437s" podCreationTimestamp="2025-09-12 23:56:18 +0000 UTC" firstStartedPulling="2025-09-12 23:56:18.703884038 +0000 UTC m=+31.968115277" lastFinishedPulling="2025-09-12 23:56:21.737353318 +0000 UTC m=+35.001584557" observedRunningTime="2025-09-12 23:56:22.106230534 +0000 UTC m=+35.370461773" watchObservedRunningTime="2025-09-12 23:56:24.881610437 +0000 UTC m=+38.145841676" Sep 12 23:56:24.934095 containerd[1447]: 2025-09-12 23:56:24.879 [INFO][4236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Sep 12 23:56:24.934095 containerd[1447]: 2025-09-12 23:56:24.879 [INFO][4236] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" iface="eth0" netns="/var/run/netns/cni-4fdefe71-ee82-63f2-d7ab-9a981b387078" Sep 12 23:56:24.934095 containerd[1447]: 2025-09-12 23:56:24.879 [INFO][4236] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" iface="eth0" netns="/var/run/netns/cni-4fdefe71-ee82-63f2-d7ab-9a981b387078" Sep 12 23:56:24.934095 containerd[1447]: 2025-09-12 23:56:24.880 [INFO][4236] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" iface="eth0" netns="/var/run/netns/cni-4fdefe71-ee82-63f2-d7ab-9a981b387078" Sep 12 23:56:24.934095 containerd[1447]: 2025-09-12 23:56:24.880 [INFO][4236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Sep 12 23:56:24.934095 containerd[1447]: 2025-09-12 23:56:24.880 [INFO][4236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Sep 12 23:56:24.934095 containerd[1447]: 2025-09-12 23:56:24.920 [INFO][4245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" HandleID="k8s-pod-network.289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Workload="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" Sep 12 23:56:24.934095 containerd[1447]: 2025-09-12 23:56:24.920 [INFO][4245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:24.934095 containerd[1447]: 2025-09-12 23:56:24.920 [INFO][4245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:24.934095 containerd[1447]: 2025-09-12 23:56:24.928 [WARNING][4245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" HandleID="k8s-pod-network.289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Workload="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" Sep 12 23:56:24.934095 containerd[1447]: 2025-09-12 23:56:24.928 [INFO][4245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" HandleID="k8s-pod-network.289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Workload="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" Sep 12 23:56:24.934095 containerd[1447]: 2025-09-12 23:56:24.930 [INFO][4245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:24.934095 containerd[1447]: 2025-09-12 23:56:24.932 [INFO][4236] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Sep 12 23:56:24.936568 systemd[1]: run-netns-cni\x2d4fdefe71\x2dee82\x2d63f2\x2dd7ab\x2d9a981b387078.mount: Deactivated successfully. Sep 12 23:56:24.937290 containerd[1447]: time="2025-09-12T23:56:24.937116054Z" level=info msg="TearDown network for sandbox \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\" successfully" Sep 12 23:56:24.937290 containerd[1447]: time="2025-09-12T23:56:24.937147894Z" level=info msg="StopPodSandbox for \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\" returns successfully" Sep 12 23:56:24.938283 kubelet[2479]: E0912 23:56:24.938082 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:24.938646 containerd[1447]: time="2025-09-12T23:56:24.938612317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4f7pq,Uid:9c36fd1a-8b9e-4673-89de-740f2dd47379,Namespace:kube-system,Attempt:1,}" Sep 12 23:56:25.124745 systemd-networkd[1374]: cali4a117d80538: Link UP Sep 12 23:56:25.125457 systemd-networkd[1374]: cali4a117d80538: Gained carrier Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.043 [INFO][4258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0 coredns-668d6bf9bc- kube-system 9c36fd1a-8b9e-4673-89de-740f2dd47379 923 0 2025-09-12 23:55:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-4f7pq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4a117d80538 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f7pq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f7pq-" Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.043 [INFO][4258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f7pq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 
23:56:25.069 [INFO][4268] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" HandleID="k8s-pod-network.74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" Workload="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.069 [INFO][4268] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" HandleID="k8s-pod-network.74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" Workload="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-4f7pq", "timestamp":"2025-09-12 23:56:25.069582989 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.069 [INFO][4268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.069 [INFO][4268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.069 [INFO][4268] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.087 [INFO][4268] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" host="localhost" Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.096 [INFO][4268] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.100 [INFO][4268] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.102 [INFO][4268] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.105 [INFO][4268] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.105 [INFO][4268] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" host="localhost" Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.107 [INFO][4268] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6 Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.111 [INFO][4268] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" host="localhost" Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.117 [INFO][4268] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" host="localhost" Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.117 [INFO][4268] ipam/ipam.go 
878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" host="localhost" Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.117 [INFO][4268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:25.142898 containerd[1447]: 2025-09-12 23:56:25.117 [INFO][4268] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" HandleID="k8s-pod-network.74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" Workload="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" Sep 12 23:56:25.143479 containerd[1447]: 2025-09-12 23:56:25.119 [INFO][4258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f7pq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c36fd1a-8b9e-4673-89de-740f2dd47379", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-4f7pq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a117d80538", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:25.143479 containerd[1447]: 2025-09-12 23:56:25.119 [INFO][4258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f7pq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" Sep 12 23:56:25.143479 containerd[1447]: 2025-09-12 23:56:25.119 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a117d80538 ContainerID="74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f7pq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" Sep 12 23:56:25.143479 containerd[1447]: 2025-09-12 23:56:25.123 
[INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f7pq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" Sep 12 23:56:25.143479 containerd[1447]: 2025-09-12 23:56:25.124 [INFO][4258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f7pq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c36fd1a-8b9e-4673-89de-740f2dd47379", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6", Pod:"coredns-668d6bf9bc-4f7pq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a117d80538", MAC:"be:19:5c:fd:45:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:25.143479 containerd[1447]: 2025-09-12 23:56:25.137 [INFO][4258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f7pq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" Sep 12 23:56:25.190060 containerd[1447]: time="2025-09-12T23:56:25.189768637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:25.190060 containerd[1447]: time="2025-09-12T23:56:25.189826237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:25.190060 containerd[1447]: time="2025-09-12T23:56:25.189860516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:25.190060 containerd[1447]: time="2025-09-12T23:56:25.189951595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:25.222953 systemd[1]: Started cri-containerd-74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6.scope - libcontainer container 74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6. Sep 12 23:56:25.236129 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:56:25.255220 containerd[1447]: time="2025-09-12T23:56:25.255169644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4f7pq,Uid:9c36fd1a-8b9e-4673-89de-740f2dd47379,Namespace:kube-system,Attempt:1,} returns sandbox id \"74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6\"" Sep 12 23:56:25.256046 kubelet[2479]: E0912 23:56:25.255999 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:25.258003 containerd[1447]: time="2025-09-12T23:56:25.257973693Z" level=info msg="CreateContainer within sandbox \"74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:56:25.275268 containerd[1447]: time="2025-09-12T23:56:25.275216225Z" level=info msg="CreateContainer within sandbox \"74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db8467c179bc1f4077b72b9b2cbe5c168f4d0fe40d1ca855bb3d3c36b863a85d\"" Sep 12 23:56:25.276124 containerd[1447]: time="2025-09-12T23:56:25.276092936Z" level=info msg="StartContainer for \"db8467c179bc1f4077b72b9b2cbe5c168f4d0fe40d1ca855bb3d3c36b863a85d\"" Sep 12 23:56:25.298928 systemd[1]: Started cri-containerd-db8467c179bc1f4077b72b9b2cbe5c168f4d0fe40d1ca855bb3d3c36b863a85d.scope - libcontainer container db8467c179bc1f4077b72b9b2cbe5c168f4d0fe40d1ca855bb3d3c36b863a85d. Sep 12 23:56:25.333366 containerd[1447]: time="2025-09-12T23:56:25.332946435Z" level=info msg="StartContainer for \"db8467c179bc1f4077b72b9b2cbe5c168f4d0fe40d1ca855bb3d3c36b863a85d\" returns successfully" Sep 12 23:56:25.824917 containerd[1447]: time="2025-09-12T23:56:25.822432895Z" level=info msg="StopPodSandbox for \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\"" Sep 12 23:56:25.824917 containerd[1447]: time="2025-09-12T23:56:25.822451375Z" level=info msg="StopPodSandbox for \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\"" Sep 12 23:56:25.939603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount191710553.mount: Deactivated successfully. Sep 12 23:56:25.961128 containerd[1447]: 2025-09-12 23:56:25.890 [INFO][4388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:25.961128 containerd[1447]: 2025-09-12 23:56:25.890 [INFO][4388] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" iface="eth0" netns="/var/run/netns/cni-1d469ea8-a48c-f7bd-9e58-c81bf74b39b9" Sep 12 23:56:25.961128 containerd[1447]: 2025-09-12 23:56:25.890 [INFO][4388] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" iface="eth0" netns="/var/run/netns/cni-1d469ea8-a48c-f7bd-9e58-c81bf74b39b9" Sep 12 23:56:25.961128 containerd[1447]: 2025-09-12 23:56:25.891 [INFO][4388] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" iface="eth0" netns="/var/run/netns/cni-1d469ea8-a48c-f7bd-9e58-c81bf74b39b9" Sep 12 23:56:25.961128 containerd[1447]: 2025-09-12 23:56:25.891 [INFO][4388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:25.961128 containerd[1447]: 2025-09-12 23:56:25.891 [INFO][4388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:25.961128 containerd[1447]: 2025-09-12 23:56:25.943 [INFO][4404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" HandleID="k8s-pod-network.2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Workload="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:25.961128 containerd[1447]: 2025-09-12 23:56:25.943 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:25.961128 containerd[1447]: 2025-09-12 23:56:25.943 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:25.961128 containerd[1447]: 2025-09-12 23:56:25.953 [WARNING][4404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" HandleID="k8s-pod-network.2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Workload="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:25.961128 containerd[1447]: 2025-09-12 23:56:25.953 [INFO][4404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" HandleID="k8s-pod-network.2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Workload="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:25.961128 containerd[1447]: 2025-09-12 23:56:25.954 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:25.961128 containerd[1447]: 2025-09-12 23:56:25.956 [INFO][4388] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:25.961128 containerd[1447]: time="2025-09-12T23:56:25.959801156Z" level=info msg="TearDown network for sandbox \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\" successfully" Sep 12 23:56:25.961128 containerd[1447]: time="2025-09-12T23:56:25.959827356Z" level=info msg="StopPodSandbox for \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\" returns successfully" Sep 12 23:56:25.961128 containerd[1447]: time="2025-09-12T23:56:25.960484869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ngcpg,Uid:6a49c785-0fd4-496d-8891-33121806033d,Namespace:kube-system,Attempt:1,}" Sep 12 23:56:25.960792 systemd[1]: run-netns-cni\x2d1d469ea8\x2da48c\x2df7bd\x2d9e58\x2dc81bf74b39b9.mount: Deactivated successfully. 
Sep 12 23:56:25.961618 kubelet[2479]: E0912 23:56:25.960140 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:25.973972 containerd[1447]: 2025-09-12 23:56:25.902 [INFO][4389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:25.973972 containerd[1447]: 2025-09-12 23:56:25.903 [INFO][4389] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" iface="eth0" netns="/var/run/netns/cni-83fee2a2-81e8-5612-365b-eae6559846a4" Sep 12 23:56:25.973972 containerd[1447]: 2025-09-12 23:56:25.903 [INFO][4389] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" iface="eth0" netns="/var/run/netns/cni-83fee2a2-81e8-5612-365b-eae6559846a4" Sep 12 23:56:25.973972 containerd[1447]: 2025-09-12 23:56:25.904 [INFO][4389] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" iface="eth0" netns="/var/run/netns/cni-83fee2a2-81e8-5612-365b-eae6559846a4" Sep 12 23:56:25.973972 containerd[1447]: 2025-09-12 23:56:25.904 [INFO][4389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:25.973972 containerd[1447]: 2025-09-12 23:56:25.904 [INFO][4389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:25.973972 containerd[1447]: 2025-09-12 23:56:25.952 [INFO][4410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" HandleID="k8s-pod-network.6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Workload="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:25.973972 containerd[1447]: 2025-09-12 23:56:25.952 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:25.973972 containerd[1447]: 2025-09-12 23:56:25.954 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:25.973972 containerd[1447]: 2025-09-12 23:56:25.965 [WARNING][4410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" HandleID="k8s-pod-network.6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Workload="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:25.973972 containerd[1447]: 2025-09-12 23:56:25.965 [INFO][4410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" HandleID="k8s-pod-network.6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Workload="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:25.973972 containerd[1447]: 2025-09-12 23:56:25.969 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:25.973972 containerd[1447]: 2025-09-12 23:56:25.971 [INFO][4389] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:25.976252 systemd[1]: run-netns-cni\x2d83fee2a2\x2d81e8\x2d5612\x2d365b\x2deae6559846a4.mount: Deactivated successfully. Sep 12 23:56:25.976535 containerd[1447]: time="2025-09-12T23:56:25.976304776Z" level=info msg="TearDown network for sandbox \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\" successfully" Sep 12 23:56:25.976535 containerd[1447]: time="2025-09-12T23:56:25.976331896Z" level=info msg="StopPodSandbox for \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\" returns successfully" Sep 12 23:56:25.977351 containerd[1447]: time="2025-09-12T23:56:25.977316605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w5b24,Uid:2a2755c3-03c7-4f05-b24d-8c93e47436ce,Namespace:calico-system,Attempt:1,}" Sep 12 23:56:26.090624 kubelet[2479]: E0912 23:56:26.090512 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:26.101409 systemd-networkd[1374]: cali4c2e1d0ed3b: Link UP Sep 12 23:56:26.101618 systemd-networkd[1374]: cali4c2e1d0ed3b: Gained carrier Sep 12 23:56:26.106867 kubelet[2479]: I0912 23:56:26.105537 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4f7pq" podStartSLOduration=33.105520638 podStartE2EDuration="33.105520638s" podCreationTimestamp="2025-09-12 23:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:56:26.105199601 +0000 UTC m=+39.369430840" watchObservedRunningTime="2025-09-12 23:56:26.105520638 +0000 UTC m=+39.369751877" Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.016 [INFO][4423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0 coredns-668d6bf9bc- kube-system 6a49c785-0fd4-496d-8891-33121806033d 937 0 2025-09-12 23:55:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-ngcpg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4c2e1d0ed3b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ngcpg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ngcpg-" Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.016 [INFO][4423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ngcpg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.048 [INFO][4449] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" HandleID="k8s-pod-network.4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" Workload="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.048 [INFO][4449] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" HandleID="k8s-pod-network.4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" Workload="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001174e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-ngcpg", "timestamp":"2025-09-12 23:56:26.048600442 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.048 [INFO][4449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.048 [INFO][4449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.048 [INFO][4449] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.059 [INFO][4449] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" host="localhost" Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.066 [INFO][4449] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.070 [INFO][4449] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.072 [INFO][4449] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.074 [INFO][4449] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.074 [INFO][4449] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" host="localhost" Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.076 [INFO][4449] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.080 [INFO][4449] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" host="localhost" Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.087 [INFO][4449] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" host="localhost" Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.087 [INFO][4449] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" host="localhost" Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.087 [INFO][4449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:56:26.119536 containerd[1447]: 2025-09-12 23:56:26.087 [INFO][4449] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" HandleID="k8s-pod-network.4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" Workload="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:26.120243 containerd[1447]: 2025-09-12 23:56:26.094 [INFO][4423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ngcpg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a49c785-0fd4-496d-8891-33121806033d", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-ngcpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c2e1d0ed3b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:26.120243 containerd[1447]: 2025-09-12 23:56:26.094 [INFO][4423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ngcpg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:26.120243 containerd[1447]: 2025-09-12 23:56:26.094 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c2e1d0ed3b ContainerID="4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ngcpg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:26.120243 containerd[1447]: 2025-09-12 23:56:26.097 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ngcpg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:26.120243 
containerd[1447]: 2025-09-12 23:56:26.098 [INFO][4423] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ngcpg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a49c785-0fd4-496d-8891-33121806033d", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b", Pod:"coredns-668d6bf9bc-ngcpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c2e1d0ed3b", MAC:"e2:52:82:7d:0a:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:26.120243 containerd[1447]: 2025-09-12 23:56:26.116 [INFO][4423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ngcpg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:26.144736 containerd[1447]: time="2025-09-12T23:56:26.144216587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:26.144736 containerd[1447]: time="2025-09-12T23:56:26.144706862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:26.144736 containerd[1447]: time="2025-09-12T23:56:26.144720582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:26.144979 containerd[1447]: time="2025-09-12T23:56:26.144937180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:26.161944 systemd[1]: Started cri-containerd-4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b.scope - libcontainer container 4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b. Sep 12 23:56:26.182278 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:56:26.209299 systemd-networkd[1374]: cali4751d6d8ff1: Link UP Sep 12 23:56:26.210074 systemd-networkd[1374]: cali4751d6d8ff1: Gained carrier Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.028 [INFO][4434] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--w5b24-eth0 csi-node-driver- calico-system 2a2755c3-03c7-4f05-b24d-8c93e47436ce 938 0 2025-09-12 23:56:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-w5b24 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4751d6d8ff1 [] [] }} ContainerID="60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" Namespace="calico-system" Pod="csi-node-driver-w5b24" WorkloadEndpoint="localhost-k8s-csi--node--driver--w5b24-" Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.029 [INFO][4434] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" Namespace="calico-system" Pod="csi-node-driver-w5b24" WorkloadEndpoint="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.057 [INFO][4455] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" HandleID="k8s-pod-network.60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" Workload="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.058 [INFO][4455] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" HandleID="k8s-pod-network.60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" Workload="localhost-k8s-csi--node--driver--w5b24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-w5b24", "timestamp":"2025-09-12 23:56:26.056981953 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.058 [INFO][4455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.087 [INFO][4455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.088 [INFO][4455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.167 [INFO][4455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" host="localhost" Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.175 [INFO][4455] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.180 [INFO][4455] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.183 [INFO][4455] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.188 [INFO][4455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.188 [INFO][4455] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" host="localhost" Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.190 [INFO][4455] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482 Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.194 [INFO][4455] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" host="localhost" Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.204 [INFO][4455] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" host="localhost" Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.204 [INFO][4455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" host="localhost" Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.204 [INFO][4455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
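The IPAM sequence above confirms the host's affinity for the 192.168.88.128/26 block before claiming 192.168.88.132 for csi-node-driver-w5b24. A minimal sketch of the containment relationship those log lines assert, using Python's standard ipaddress module (the block and address values are taken from the log; the script itself is illustrative, not Calico code):

    import ipaddress

    # Values from the IPAM log lines above.
    block = ipaddress.ip_network("192.168.88.128/26")   # host-affine block
    claimed = ipaddress.ip_address("192.168.88.132")    # address claimed for csi-node-driver-w5b24

    # The claim is only consistent if the address falls inside the affine block.
    assert claimed in block
    print(f"{claimed} is one of {block.num_addresses} addresses in {block}")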
Sep 12 23:56:26.233873 containerd[1447]: 2025-09-12 23:56:26.204 [INFO][4455] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" HandleID="k8s-pod-network.60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" Workload="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:26.234769 containerd[1447]: 2025-09-12 23:56:26.206 [INFO][4434] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" Namespace="calico-system" Pod="csi-node-driver-w5b24" WorkloadEndpoint="localhost-k8s-csi--node--driver--w5b24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w5b24-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2a2755c3-03c7-4f05-b24d-8c93e47436ce", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-w5b24", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4751d6d8ff1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:26.234769 containerd[1447]: 2025-09-12 23:56:26.206 [INFO][4434] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" Namespace="calico-system" Pod="csi-node-driver-w5b24" WorkloadEndpoint="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:26.234769 containerd[1447]: 2025-09-12 23:56:26.206 [INFO][4434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4751d6d8ff1 ContainerID="60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" Namespace="calico-system" Pod="csi-node-driver-w5b24" WorkloadEndpoint="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:26.234769 containerd[1447]: 2025-09-12 23:56:26.210 [INFO][4434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" Namespace="calico-system" Pod="csi-node-driver-w5b24" WorkloadEndpoint="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:26.234769 containerd[1447]: 2025-09-12 23:56:26.210 [INFO][4434] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" Namespace="calico-system" Pod="csi-node-driver-w5b24" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--w5b24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w5b24-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2a2755c3-03c7-4f05-b24d-8c93e47436ce", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482", Pod:"csi-node-driver-w5b24", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4751d6d8ff1", MAC:"f2:41:67:76:3f:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:26.234769 containerd[1447]: 2025-09-12 23:56:26.226 [INFO][4434] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482" Namespace="calico-system" Pod="csi-node-driver-w5b24" WorkloadEndpoint="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:26.247849 containerd[1447]: time="2025-09-12T23:56:26.247788809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ngcpg,Uid:6a49c785-0fd4-496d-8891-33121806033d,Namespace:kube-system,Attempt:1,} returns sandbox id \"4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b\"" Sep 12 23:56:26.249135 kubelet[2479]: E0912 23:56:26.248881 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:26.252975 containerd[1447]: time="2025-09-12T23:56:26.252922594Z" level=info msg="CreateContainer within sandbox \"4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:56:26.263532 containerd[1447]: time="2025-09-12T23:56:26.257719303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:26.263532 containerd[1447]: time="2025-09-12T23:56:26.262983327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:26.263532 containerd[1447]: time="2025-09-12T23:56:26.263004727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:26.263532 containerd[1447]: time="2025-09-12T23:56:26.263186805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:26.269806 containerd[1447]: time="2025-09-12T23:56:26.269750976Z" level=info msg="CreateContainer within sandbox \"4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cdb5ee98985b6625cd2eae13bd554f744be16763c7e1cb704977311983c0133b\"" Sep 12 23:56:26.270530 containerd[1447]: time="2025-09-12T23:56:26.270506288Z" level=info msg="StartContainer for \"cdb5ee98985b6625cd2eae13bd554f744be16763c7e1cb704977311983c0133b\"" Sep 12 23:56:26.284944 systemd[1]: Started cri-containerd-60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482.scope - libcontainer container 60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482. Sep 12 23:56:26.293255 systemd[1]: Started cri-containerd-cdb5ee98985b6625cd2eae13bd554f744be16763c7e1cb704977311983c0133b.scope - libcontainer container cdb5ee98985b6625cd2eae13bd554f744be16763c7e1cb704977311983c0133b. Sep 12 23:56:26.296846 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:56:26.317546 containerd[1447]: time="2025-09-12T23:56:26.317510029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w5b24,Uid:2a2755c3-03c7-4f05-b24d-8c93e47436ce,Namespace:calico-system,Attempt:1,} returns sandbox id \"60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482\"" Sep 12 23:56:26.318986 containerd[1447]: time="2025-09-12T23:56:26.318958293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 23:56:26.328462 containerd[1447]: time="2025-09-12T23:56:26.328410473Z" level=info msg="StartContainer for \"cdb5ee98985b6625cd2eae13bd554f744be16763c7e1cb704977311983c0133b\" returns successfully" Sep 12 23:56:26.755105 systemd-networkd[1374]: cali4a117d80538: Gained IPv6LL Sep 12 23:56:26.823440 containerd[1447]: time="2025-09-12T23:56:26.823299422Z" level=info msg="StopPodSandbox for \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\"" Sep 12 23:56:26.928958 containerd[1447]: 2025-09-12 23:56:26.883 [INFO][4621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:26.928958 containerd[1447]: 2025-09-12 23:56:26.883 [INFO][4621] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" iface="eth0" netns="/var/run/netns/cni-30ba1ee3-1559-69f2-41b5-d30e512abc26" Sep 12 23:56:26.928958 containerd[1447]: 2025-09-12 23:56:26.883 [INFO][4621] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" iface="eth0" netns="/var/run/netns/cni-30ba1ee3-1559-69f2-41b5-d30e512abc26" Sep 12 23:56:26.928958 containerd[1447]: 2025-09-12 23:56:26.884 [INFO][4621] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" iface="eth0" netns="/var/run/netns/cni-30ba1ee3-1559-69f2-41b5-d30e512abc26" Sep 12 23:56:26.928958 containerd[1447]: 2025-09-12 23:56:26.884 [INFO][4621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:26.928958 containerd[1447]: 2025-09-12 23:56:26.884 [INFO][4621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:26.928958 containerd[1447]: 2025-09-12 23:56:26.909 [INFO][4629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" HandleID="k8s-pod-network.d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Workload="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:26.928958 containerd[1447]: 2025-09-12 23:56:26.909 [INFO][4629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:26.928958 containerd[1447]: 2025-09-12 23:56:26.909 [INFO][4629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:26.928958 containerd[1447]: 2025-09-12 23:56:26.920 [WARNING][4629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" HandleID="k8s-pod-network.d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Workload="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:26.928958 containerd[1447]: 2025-09-12 23:56:26.920 [INFO][4629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" HandleID="k8s-pod-network.d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Workload="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:26.928958 containerd[1447]: 2025-09-12 23:56:26.923 [INFO][4629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:26.928958 containerd[1447]: 2025-09-12 23:56:26.925 [INFO][4621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:26.929718 containerd[1447]: time="2025-09-12T23:56:26.929595654Z" level=info msg="TearDown network for sandbox \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\" successfully" Sep 12 23:56:26.929718 containerd[1447]: time="2025-09-12T23:56:26.929627014Z" level=info msg="StopPodSandbox for \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\" returns successfully" Sep 12 23:56:26.930257 containerd[1447]: time="2025-09-12T23:56:26.930224928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cb8fb495-tc6df,Uid:4ac91082-d88e-4327-ad72-86092b0b92eb,Namespace:calico-system,Attempt:1,}" Sep 12 23:56:26.942153 systemd[1]: run-netns-cni\x2d30ba1ee3\x2d1559\x2d69f2\x2d41b5\x2dd30e512abc26.mount: Deactivated successfully. 
Sep 12 23:56:27.096683 kubelet[2479]: E0912 23:56:27.096646 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:27.100051 kubelet[2479]: E0912 23:56:27.099731 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:27.116227 systemd-networkd[1374]: calif9467aaade1: Link UP Sep 12 23:56:27.118963 systemd-networkd[1374]: calif9467aaade1: Gained carrier Sep 12 23:56:27.124383 kubelet[2479]: I0912 23:56:27.123279 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ngcpg" podStartSLOduration=34.123259514 podStartE2EDuration="34.123259514s" podCreationTimestamp="2025-09-12 23:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:56:27.120829259 +0000 UTC m=+40.385060498" watchObservedRunningTime="2025-09-12 23:56:27.123259514 +0000 UTC m=+40.387490753" Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.010 [INFO][4637] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0 calico-kube-controllers-66cb8fb495- calico-system 4ac91082-d88e-4327-ad72-86092b0b92eb 964 0 2025-09-12 23:56:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66cb8fb495 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-66cb8fb495-tc6df eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif9467aaade1 [] [] }} ContainerID="327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" Namespace="calico-system" Pod="calico-kube-controllers-66cb8fb495-tc6df" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-" Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.010 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" Namespace="calico-system" Pod="calico-kube-controllers-66cb8fb495-tc6df" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.044 [INFO][4652] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" HandleID="k8s-pod-network.327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" Workload="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.044 [INFO][4652] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" HandleID="k8s-pod-network.327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" Workload="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059e020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-66cb8fb495-tc6df", "timestamp":"2025-09-12 23:56:27.044720325 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.045 [INFO][4652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.045 [INFO][4652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.045 [INFO][4652] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.062 [INFO][4652] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" host="localhost" Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.070 [INFO][4652] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.080 [INFO][4652] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.082 [INFO][4652] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.085 [INFO][4652] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.085 [INFO][4652] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" host="localhost" Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.088 [INFO][4652] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62 Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.093 [INFO][4652] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" host="localhost" Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.103 [INFO][4652] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" host="localhost" Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.103 [INFO][4652] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" host="localhost" Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.103 [INFO][4652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:56:27.145507 containerd[1447]: 2025-09-12 23:56:27.103 [INFO][4652] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" HandleID="k8s-pod-network.327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" Workload="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:27.146556 containerd[1447]: 2025-09-12 23:56:27.110 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" Namespace="calico-system" Pod="calico-kube-controllers-66cb8fb495-tc6df" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0", GenerateName:"calico-kube-controllers-66cb8fb495-", Namespace:"calico-system", SelfLink:"", UID:"4ac91082-d88e-4327-ad72-86092b0b92eb", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66cb8fb495", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-66cb8fb495-tc6df", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif9467aaade1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:27.146556 containerd[1447]: 2025-09-12 23:56:27.111 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" Namespace="calico-system" Pod="calico-kube-controllers-66cb8fb495-tc6df" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:27.146556 containerd[1447]: 2025-09-12 23:56:27.111 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9467aaade1 ContainerID="327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" Namespace="calico-system" Pod="calico-kube-controllers-66cb8fb495-tc6df" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:27.146556 containerd[1447]: 2025-09-12 23:56:27.119 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" Namespace="calico-system" Pod="calico-kube-controllers-66cb8fb495-tc6df" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:27.146556 containerd[1447]: 2025-09-12 23:56:27.121 [INFO][4637] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" Namespace="calico-system" Pod="calico-kube-controllers-66cb8fb495-tc6df" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0", GenerateName:"calico-kube-controllers-66cb8fb495-", Namespace:"calico-system", SelfLink:"", UID:"4ac91082-d88e-4327-ad72-86092b0b92eb", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66cb8fb495", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62", Pod:"calico-kube-controllers-66cb8fb495-tc6df", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif9467aaade1", MAC:"06:a5:a0:32:8d:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:27.146556 containerd[1447]: 2025-09-12 23:56:27.139 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62" Namespace="calico-system" Pod="calico-kube-controllers-66cb8fb495-tc6df" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:27.171166 containerd[1447]: time="2025-09-12T23:56:27.171009661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:27.171166 containerd[1447]: time="2025-09-12T23:56:27.171095140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:27.171697 containerd[1447]: time="2025-09-12T23:56:27.171474496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:27.171697 containerd[1447]: time="2025-09-12T23:56:27.171579855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:27.196956 systemd[1]: Started cri-containerd-327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62.scope - libcontainer container 327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62. 
Sep 12 23:56:27.210148 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:56:27.239667 containerd[1447]: time="2025-09-12T23:56:27.239617272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cb8fb495-tc6df,Uid:4ac91082-d88e-4327-ad72-86092b0b92eb,Namespace:calico-system,Attempt:1,} returns sandbox id \"327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62\"" Sep 12 23:56:27.266949 systemd-networkd[1374]: cali4c2e1d0ed3b: Gained IPv6LL Sep 12 23:56:27.447182 containerd[1447]: time="2025-09-12T23:56:27.447059649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:27.448607 containerd[1447]: time="2025-09-12T23:56:27.448505514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 23:56:27.452446 containerd[1447]: time="2025-09-12T23:56:27.452386834Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:27.454840 containerd[1447]: time="2025-09-12T23:56:27.454775730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:27.455415 containerd[1447]: time="2025-09-12T23:56:27.455376163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.136378671s" Sep 12 23:56:27.455475 containerd[1447]: time="2025-09-12T23:56:27.455415803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 23:56:27.457939 containerd[1447]: time="2025-09-12T23:56:27.456559351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 23:56:27.458617 containerd[1447]: time="2025-09-12T23:56:27.458581930Z" level=info msg="CreateContainer within sandbox \"60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 23:56:27.474735 containerd[1447]: time="2025-09-12T23:56:27.474678124Z" level=info msg="CreateContainer within sandbox \"60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9365c67d31394923639b745d87e31d0e540047e515d7f0693c30f6cbe32617ce\"" Sep 12 23:56:27.475211 containerd[1447]: time="2025-09-12T23:56:27.475182679Z" level=info msg="StartContainer for \"9365c67d31394923639b745d87e31d0e540047e515d7f0693c30f6cbe32617ce\"" Sep 12 23:56:27.503971 systemd[1]: Started cri-containerd-9365c67d31394923639b745d87e31d0e540047e515d7f0693c30f6cbe32617ce.scope - libcontainer container 9365c67d31394923639b745d87e31d0e540047e515d7f0693c30f6cbe32617ce. 
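The pull above resolves the tag ghcr.io/flatcar/calico/csi:v3.30.3 to a digest-pinned reference before reporting the 1.136378671s pull time. A short sketch splitting such a reference into repository and digest (the reference is copied from the log; this string handling is a simplification of the full OCI reference grammar):

    # Digest-pinned reference reported by containerd above.
    ref = "ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2"

    repo, _, digest = ref.partition("@")
    print(repo)    # ghcr.io/flatcar/calico/csi
    print(digest)  # sha256:f22c8801...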
Sep 12 23:56:27.545235 containerd[1447]: time="2025-09-12T23:56:27.545188916Z" level=info msg="StartContainer for \"9365c67d31394923639b745d87e31d0e540047e515d7f0693c30f6cbe32617ce\" returns successfully" Sep 12 23:56:27.824152 containerd[1447]: time="2025-09-12T23:56:27.824038355Z" level=info msg="StopPodSandbox for \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\"" Sep 12 23:56:27.936861 containerd[1447]: 2025-09-12 23:56:27.888 [INFO][4764] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:27.936861 containerd[1447]: 2025-09-12 23:56:27.889 [INFO][4764] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" iface="eth0" netns="/var/run/netns/cni-8a97cee1-43ed-f3b1-bf71-f48e96d4c5d7" Sep 12 23:56:27.936861 containerd[1447]: 2025-09-12 23:56:27.889 [INFO][4764] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" iface="eth0" netns="/var/run/netns/cni-8a97cee1-43ed-f3b1-bf71-f48e96d4c5d7" Sep 12 23:56:27.936861 containerd[1447]: 2025-09-12 23:56:27.889 [INFO][4764] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" iface="eth0" netns="/var/run/netns/cni-8a97cee1-43ed-f3b1-bf71-f48e96d4c5d7" Sep 12 23:56:27.936861 containerd[1447]: 2025-09-12 23:56:27.889 [INFO][4764] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:27.936861 containerd[1447]: 2025-09-12 23:56:27.889 [INFO][4764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:27.936861 containerd[1447]: 2025-09-12 23:56:27.913 [INFO][4774] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" HandleID="k8s-pod-network.5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Workload="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:27.936861 containerd[1447]: 2025-09-12 23:56:27.913 [INFO][4774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:27.936861 containerd[1447]: 2025-09-12 23:56:27.913 [INFO][4774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:27.936861 containerd[1447]: 2025-09-12 23:56:27.926 [WARNING][4774] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" HandleID="k8s-pod-network.5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Workload="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:27.936861 containerd[1447]: 2025-09-12 23:56:27.927 [INFO][4774] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" HandleID="k8s-pod-network.5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Workload="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:27.936861 containerd[1447]: 2025-09-12 23:56:27.929 [INFO][4774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:56:27.936861 containerd[1447]: 2025-09-12 23:56:27.934 [INFO][4764] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:27.938665 containerd[1447]: time="2025-09-12T23:56:27.937050428Z" level=info msg="TearDown network for sandbox \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\" successfully" Sep 12 23:56:27.938665 containerd[1447]: time="2025-09-12T23:56:27.937080308Z" level=info msg="StopPodSandbox for \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\" returns successfully" Sep 12 23:56:27.938665 containerd[1447]: time="2025-09-12T23:56:27.937711541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbd74567-jv87r,Uid:84ebbe55-2c84-464d-aba6-aefb412ce42b,Namespace:calico-apiserver,Attempt:1,}" Sep 12 23:56:27.943583 systemd[1]: run-containerd-runc-k8s.io-327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62-runc.SStqwb.mount: Deactivated successfully. Sep 12 23:56:27.943685 systemd[1]: run-netns-cni\x2d8a97cee1\x2d43ed\x2df3b1\x2dbf71\x2df48e96d4c5d7.mount: Deactivated successfully. Sep 12 23:56:28.110179 kubelet[2479]: E0912 23:56:28.109430 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:28.110179 kubelet[2479]: E0912 23:56:28.109796 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:28.129389 systemd-networkd[1374]: califf2b7b16249: Link UP Sep 12 23:56:28.129877 systemd-networkd[1374]: califf2b7b16249: Gained carrier Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.032 [INFO][4782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0 calico-apiserver-6dbd74567- calico-apiserver 84ebbe55-2c84-464d-aba6-aefb412ce42b 985 0 2025-09-12 23:56:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dbd74567 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6dbd74567-jv87r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califf2b7b16249 [] [] }} ContainerID="7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-jv87r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--jv87r-" Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.032 [INFO][4782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-jv87r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.058 [INFO][4797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" HandleID="k8s-pod-network.7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" Workload="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 
23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.058 [INFO][4797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" HandleID="k8s-pod-network.7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" Workload="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dcfe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6dbd74567-jv87r", "timestamp":"2025-09-12 23:56:28.058230631 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.058 [INFO][4797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.058 [INFO][4797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.058 [INFO][4797] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.070 [INFO][4797] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" host="localhost" Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.081 [INFO][4797] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.092 [INFO][4797] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.095 [INFO][4797] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.103 [INFO][4797] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.103 [INFO][4797] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" host="localhost" Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.105 [INFO][4797] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344 Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.110 [INFO][4797] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" host="localhost" Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.122 [INFO][4797] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" host="localhost" Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.122 [INFO][4797] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" host="localhost" Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.122 [INFO][4797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:56:28.151557 containerd[1447]: 2025-09-12 23:56:28.122 [INFO][4797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" HandleID="k8s-pod-network.7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" Workload="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:28.152294 containerd[1447]: 2025-09-12 23:56:28.124 [INFO][4782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-jv87r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0", GenerateName:"calico-apiserver-6dbd74567-", Namespace:"calico-apiserver", SelfLink:"", UID:"84ebbe55-2c84-464d-aba6-aefb412ce42b", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbd74567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6dbd74567-jv87r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf2b7b16249", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:28.152294 containerd[1447]: 2025-09-12 23:56:28.125 [INFO][4782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-jv87r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:28.152294 containerd[1447]: 2025-09-12 23:56:28.125 [INFO][4782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf2b7b16249 ContainerID="7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-jv87r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:28.152294 containerd[1447]: 2025-09-12 23:56:28.130 [INFO][4782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-jv87r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:28.152294 containerd[1447]: 2025-09-12 23:56:28.130 [INFO][4782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-jv87r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0", GenerateName:"calico-apiserver-6dbd74567-", Namespace:"calico-apiserver", SelfLink:"", UID:"84ebbe55-2c84-464d-aba6-aefb412ce42b", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbd74567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344", Pod:"calico-apiserver-6dbd74567-jv87r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf2b7b16249", MAC:"de:c9:7d:fc:a8:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:28.152294 containerd[1447]: 2025-09-12 23:56:28.146 [INFO][4782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-jv87r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:28.169692 containerd[1447]: time="2025-09-12T23:56:28.169618710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:28.169692 containerd[1447]: time="2025-09-12T23:56:28.169665950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:28.169692 containerd[1447]: time="2025-09-12T23:56:28.169677190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:28.169964 containerd[1447]: time="2025-09-12T23:56:28.169745029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:28.189952 systemd[1]: Started cri-containerd-7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344.scope - libcontainer container 7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344. 
Sep 12 23:56:28.205601 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:56:28.227861 systemd-networkd[1374]: cali4751d6d8ff1: Gained IPv6LL Sep 12 23:56:28.236821 containerd[1447]: time="2025-09-12T23:56:28.236771874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbd74567-jv87r,Uid:84ebbe55-2c84-464d-aba6-aefb412ce42b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344\"" Sep 12 23:56:28.825139 containerd[1447]: time="2025-09-12T23:56:28.825040433Z" level=info msg="StopPodSandbox for \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\"" Sep 12 23:56:28.825680 containerd[1447]: time="2025-09-12T23:56:28.825434749Z" level=info msg="StopPodSandbox for \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\"" Sep 12 23:56:28.962454 containerd[1447]: 2025-09-12 23:56:28.907 [INFO][4879] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Sep 12 23:56:28.962454 containerd[1447]: 2025-09-12 23:56:28.908 [INFO][4879] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" iface="eth0" netns="/var/run/netns/cni-3a60fdfd-5c81-f928-ddaa-8b07027fad2c" Sep 12 23:56:28.962454 containerd[1447]: 2025-09-12 23:56:28.908 [INFO][4879] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" iface="eth0" netns="/var/run/netns/cni-3a60fdfd-5c81-f928-ddaa-8b07027fad2c" Sep 12 23:56:28.962454 containerd[1447]: 2025-09-12 23:56:28.908 [INFO][4879] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" iface="eth0" netns="/var/run/netns/cni-3a60fdfd-5c81-f928-ddaa-8b07027fad2c" Sep 12 23:56:28.962454 containerd[1447]: 2025-09-12 23:56:28.908 [INFO][4879] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Sep 12 23:56:28.962454 containerd[1447]: 2025-09-12 23:56:28.908 [INFO][4879] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Sep 12 23:56:28.962454 containerd[1447]: 2025-09-12 23:56:28.943 [INFO][4903] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" HandleID="k8s-pod-network.6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Workload="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0" Sep 12 23:56:28.962454 containerd[1447]: 2025-09-12 23:56:28.943 [INFO][4903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:28.962454 containerd[1447]: 2025-09-12 23:56:28.943 [INFO][4903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:28.962454 containerd[1447]: 2025-09-12 23:56:28.953 [WARNING][4903] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" HandleID="k8s-pod-network.6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Workload="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0" Sep 12 23:56:28.962454 containerd[1447]: 2025-09-12 23:56:28.953 [INFO][4903] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" HandleID="k8s-pod-network.6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Workload="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0" Sep 12 23:56:28.962454 containerd[1447]: 2025-09-12 23:56:28.955 [INFO][4903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:28.962454 containerd[1447]: 2025-09-12 23:56:28.957 [INFO][4879] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Sep 12 23:56:28.964253 containerd[1447]: time="2025-09-12T23:56:28.963626757Z" level=info msg="TearDown network for sandbox \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\" successfully" Sep 12 23:56:28.964253 containerd[1447]: time="2025-09-12T23:56:28.963665317Z" level=info msg="StopPodSandbox for \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\" returns successfully" Sep 12 23:56:28.966590 containerd[1447]: time="2025-09-12T23:56:28.966555568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-fbbp9,Uid:a133ea3e-84b0-48ca-a5f9-b58285cab3ba,Namespace:calico-system,Attempt:1,}" Sep 12 23:56:28.967177 systemd[1]: run-netns-cni\x2d3a60fdfd\x2d5c81\x2df928\x2dddaa\x2d8b07027fad2c.mount: Deactivated successfully. Sep 12 23:56:28.984432 containerd[1447]: 2025-09-12 23:56:28.925 [INFO][4890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:28.984432 containerd[1447]: 2025-09-12 23:56:28.925 [INFO][4890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" iface="eth0" netns="/var/run/netns/cni-696f6abb-2699-96f0-697e-9a39f6e630b2" Sep 12 23:56:28.984432 containerd[1447]: 2025-09-12 23:56:28.926 [INFO][4890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" iface="eth0" netns="/var/run/netns/cni-696f6abb-2699-96f0-697e-9a39f6e630b2" Sep 12 23:56:28.984432 containerd[1447]: 2025-09-12 23:56:28.926 [INFO][4890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" iface="eth0" netns="/var/run/netns/cni-696f6abb-2699-96f0-697e-9a39f6e630b2" Sep 12 23:56:28.984432 containerd[1447]: 2025-09-12 23:56:28.926 [INFO][4890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:28.984432 containerd[1447]: 2025-09-12 23:56:28.926 [INFO][4890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:28.984432 containerd[1447]: 2025-09-12 23:56:28.958 [INFO][4910] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" HandleID="k8s-pod-network.4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Workload="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:28.984432 containerd[1447]: 2025-09-12 23:56:28.959 [INFO][4910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:28.984432 containerd[1447]: 2025-09-12 23:56:28.959 [INFO][4910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:28.984432 containerd[1447]: 2025-09-12 23:56:28.978 [WARNING][4910] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" HandleID="k8s-pod-network.4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Workload="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:28.984432 containerd[1447]: 2025-09-12 23:56:28.978 [INFO][4910] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" HandleID="k8s-pod-network.4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Workload="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:28.984432 containerd[1447]: 2025-09-12 23:56:28.980 [INFO][4910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:28.984432 containerd[1447]: 2025-09-12 23:56:28.982 [INFO][4890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:28.997422 containerd[1447]: time="2025-09-12T23:56:28.994890123Z" level=info msg="TearDown network for sandbox \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\" successfully" Sep 12 23:56:28.997422 containerd[1447]: time="2025-09-12T23:56:28.994932522Z" level=info msg="StopPodSandbox for \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\" returns successfully" Sep 12 23:56:28.997019 systemd[1]: run-netns-cni\x2d696f6abb\x2d2699\x2d96f0\x2d697e\x2d9a39f6e630b2.mount: Deactivated successfully. 
Sep 12 23:56:28.998092 containerd[1447]: time="2025-09-12T23:56:28.997919652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbd74567-2bsfz,Uid:6e072d2b-0542-4e7a-92e2-10800c8d71d7,Namespace:calico-apiserver,Attempt:1,}" Sep 12 23:56:29.058904 systemd-networkd[1374]: calif9467aaade1: Gained IPv6LL Sep 12 23:56:29.114281 kubelet[2479]: E0912 23:56:29.114171 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:56:29.235006 systemd-networkd[1374]: cali3c073ee94a8: Link UP Sep 12 23:56:29.235769 systemd-networkd[1374]: cali3c073ee94a8: Gained carrier Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.059 [INFO][4919] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--fbbp9-eth0 goldmane-54d579b49d- calico-system a133ea3e-84b0-48ca-a5f9-b58285cab3ba 997 0 2025-09-12 23:56:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-fbbp9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3c073ee94a8 [] [] }} ContainerID="cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" Namespace="calico-system" Pod="goldmane-54d579b49d-fbbp9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fbbp9-" Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.059 [INFO][4919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" Namespace="calico-system" Pod="goldmane-54d579b49d-fbbp9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0" Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.103 [INFO][4949] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" HandleID="k8s-pod-network.cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" Workload="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0" Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.104 [INFO][4949] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" HandleID="k8s-pod-network.cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" Workload="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-fbbp9", "timestamp":"2025-09-12 23:56:29.103847451 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.104 [INFO][4949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.104 [INFO][4949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.105 [INFO][4949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.186 [INFO][4949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" host="localhost" Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.192 [INFO][4949] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.199 [INFO][4949] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.202 [INFO][4949] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.204 [INFO][4949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.204 [INFO][4949] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" host="localhost" Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.207 [INFO][4949] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6 Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.214 [INFO][4949] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" host="localhost" Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.224 [INFO][4949] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" host="localhost" Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.224 [INFO][4949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" host="localhost" Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.224 [INFO][4949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:56:29.256262 containerd[1447]: 2025-09-12 23:56:29.224 [INFO][4949] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" HandleID="k8s-pod-network.cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" Workload="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0" Sep 12 23:56:29.256914 containerd[1447]: 2025-09-12 23:56:29.232 [INFO][4919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" Namespace="calico-system" Pod="goldmane-54d579b49d-fbbp9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--fbbp9-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a133ea3e-84b0-48ca-a5f9-b58285cab3ba", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-fbbp9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3c073ee94a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:29.256914 containerd[1447]: 2025-09-12 23:56:29.233 [INFO][4919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" Namespace="calico-system" Pod="goldmane-54d579b49d-fbbp9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0" Sep 12 23:56:29.256914 containerd[1447]: 2025-09-12 23:56:29.233 [INFO][4919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c073ee94a8 ContainerID="cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" Namespace="calico-system" Pod="goldmane-54d579b49d-fbbp9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0" Sep 12 23:56:29.256914 containerd[1447]: 2025-09-12 23:56:29.237 [INFO][4919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" Namespace="calico-system" Pod="goldmane-54d579b49d-fbbp9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0" Sep 12 23:56:29.256914 containerd[1447]: 2025-09-12 23:56:29.237 [INFO][4919] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" Namespace="calico-system" Pod="goldmane-54d579b49d-fbbp9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--fbbp9-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a133ea3e-84b0-48ca-a5f9-b58285cab3ba", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6", Pod:"goldmane-54d579b49d-fbbp9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3c073ee94a8", MAC:"66:66:79:6c:9d:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:29.256914 containerd[1447]: 2025-09-12 23:56:29.252 [INFO][4919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6" Namespace="calico-system" Pod="goldmane-54d579b49d-fbbp9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0" Sep 12 23:56:29.313960 containerd[1447]: time="2025-09-12T23:56:29.312047127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:29.313960 containerd[1447]: time="2025-09-12T23:56:29.313872269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:29.313960 containerd[1447]: time="2025-09-12T23:56:29.313902389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:29.314140 containerd[1447]: time="2025-09-12T23:56:29.314004068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:29.331133 systemd-networkd[1374]: calicdb9b448b36: Link UP Sep 12 23:56:29.332409 systemd-networkd[1374]: calicdb9b448b36: Gained carrier Sep 12 23:56:29.343111 systemd[1]: Started cri-containerd-cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6.scope - libcontainer container cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6. 
Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.099 [INFO][4933] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0 calico-apiserver-6dbd74567- calico-apiserver 6e072d2b-0542-4e7a-92e2-10800c8d71d7 998 0 2025-09-12 23:56:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dbd74567 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6dbd74567-2bsfz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicdb9b448b36 [] [] }} ContainerID="f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-2bsfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-" Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.099 [INFO][4933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-2bsfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.136 [INFO][4959] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" HandleID="k8s-pod-network.f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" Workload="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.136 [INFO][4959] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" HandleID="k8s-pod-network.f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" Workload="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000340570), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6dbd74567-2bsfz", "timestamp":"2025-09-12 23:56:29.136027575 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.136 [INFO][4959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.224 [INFO][4959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.224 [INFO][4959] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.286 [INFO][4959] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" host="localhost" Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.292 [INFO][4959] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.299 [INFO][4959] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.302 [INFO][4959] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.305 [INFO][4959] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.305 [INFO][4959] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" host="localhost" Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.307 [INFO][4959] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1 Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.313 [INFO][4959] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" host="localhost" Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.320 [INFO][4959] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" host="localhost" Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.320 [INFO][4959] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" host="localhost" Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.320 [INFO][4959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:56:29.363927 containerd[1447]: 2025-09-12 23:56:29.320 [INFO][4959] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" HandleID="k8s-pod-network.f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" Workload="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:29.364585 containerd[1447]: 2025-09-12 23:56:29.327 [INFO][4933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-2bsfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0", GenerateName:"calico-apiserver-6dbd74567-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e072d2b-0542-4e7a-92e2-10800c8d71d7", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbd74567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6dbd74567-2bsfz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdb9b448b36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:29.364585 containerd[1447]: 2025-09-12 23:56:29.328 [INFO][4933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-2bsfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:29.364585 containerd[1447]: 2025-09-12 23:56:29.328 [INFO][4933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdb9b448b36 ContainerID="f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-2bsfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:29.364585 containerd[1447]: 2025-09-12 23:56:29.333 [INFO][4933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-2bsfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:29.364585 containerd[1447]: 2025-09-12 23:56:29.339 [INFO][4933] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-2bsfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0", GenerateName:"calico-apiserver-6dbd74567-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e072d2b-0542-4e7a-92e2-10800c8d71d7", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbd74567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1", Pod:"calico-apiserver-6dbd74567-2bsfz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdb9b448b36", MAC:"f6:51:46:06:43:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:29.364585 containerd[1447]: 2025-09-12 23:56:29.361 [INFO][4933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1" Namespace="calico-apiserver" Pod="calico-apiserver-6dbd74567-2bsfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:29.374017 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:56:29.390720 containerd[1447]: time="2025-09-12T23:56:29.390413998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:29.391678 containerd[1447]: time="2025-09-12T23:56:29.390547876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:29.394988 containerd[1447]: time="2025-09-12T23:56:29.391664025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:29.394988 containerd[1447]: time="2025-09-12T23:56:29.391795904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:29.425056 systemd[1]: Started cri-containerd-f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1.scope - libcontainer container f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1. 
Sep 12 23:56:29.451037 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:56:29.495466 containerd[1447]: time="2025-09-12T23:56:29.495409047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-fbbp9,Uid:a133ea3e-84b0-48ca-a5f9-b58285cab3ba,Namespace:calico-system,Attempt:1,} returns sandbox id \"cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6\"" Sep 12 23:56:29.501776 containerd[1447]: time="2025-09-12T23:56:29.501660905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbd74567-2bsfz,Uid:6e072d2b-0542-4e7a-92e2-10800c8d71d7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1\"" Sep 12 23:56:29.666824 containerd[1447]: time="2025-09-12T23:56:29.666674885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:29.668340 containerd[1447]: time="2025-09-12T23:56:29.668112191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 12 23:56:29.673988 containerd[1447]: time="2025-09-12T23:56:29.673932494Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:29.678450 containerd[1447]: time="2025-09-12T23:56:29.678408210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:29.679799 containerd[1447]: time="2025-09-12T23:56:29.679681557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.223089086s" Sep 12 23:56:29.679890 containerd[1447]: time="2025-09-12T23:56:29.679809956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 23:56:29.682204 containerd[1447]: time="2025-09-12T23:56:29.681279101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 23:56:29.687631 containerd[1447]: time="2025-09-12T23:56:29.686915286Z" level=info msg="CreateContainer within sandbox \"327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 23:56:29.702314 containerd[1447]: time="2025-09-12T23:56:29.702268975Z" level=info msg="CreateContainer within sandbox \"327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6cc38b6b18b6c4ab8b7300c25bea6b3e9ce33cdd3f36e2a0f645fb434c89d551\"" Sep 12 23:56:29.703216 containerd[1447]: time="2025-09-12T23:56:29.703185966Z" level=info msg="StartContainer for \"6cc38b6b18b6c4ab8b7300c25bea6b3e9ce33cdd3f36e2a0f645fb434c89d551\"" Sep 12 23:56:29.734994 systemd[1]: Started 
cri-containerd-6cc38b6b18b6c4ab8b7300c25bea6b3e9ce33cdd3f36e2a0f645fb434c89d551.scope - libcontainer container 6cc38b6b18b6c4ab8b7300c25bea6b3e9ce33cdd3f36e2a0f645fb434c89d551. Sep 12 23:56:29.785703 containerd[1447]: time="2025-09-12T23:56:29.785645357Z" level=info msg="StartContainer for \"6cc38b6b18b6c4ab8b7300c25bea6b3e9ce33cdd3f36e2a0f645fb434c89d551\" returns successfully" Sep 12 23:56:30.083045 systemd-networkd[1374]: califf2b7b16249: Gained IPv6LL Sep 12 23:56:30.135282 kubelet[2479]: I0912 23:56:30.135119 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66cb8fb495-tc6df" podStartSLOduration=21.699187468 podStartE2EDuration="24.135101036s" podCreationTimestamp="2025-09-12 23:56:06 +0000 UTC" firstStartedPulling="2025-09-12 23:56:27.2446597 +0000 UTC m=+40.508890939" lastFinishedPulling="2025-09-12 23:56:29.680573228 +0000 UTC m=+42.944804507" observedRunningTime="2025-09-12 23:56:30.134129126 +0000 UTC m=+43.398360365" watchObservedRunningTime="2025-09-12 23:56:30.135101036 +0000 UTC m=+43.399332275" Sep 12 23:56:30.595955 systemd-networkd[1374]: cali3c073ee94a8: Gained IPv6LL Sep 12 23:56:31.035689 systemd[1]: Started sshd@7-10.0.0.36:22-10.0.0.1:41518.service - OpenSSH per-connection server daemon (10.0.0.1:41518). Sep 12 23:56:31.089884 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 41518 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 12 23:56:31.091616 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:56:31.095747 systemd-logind[1433]: New session 8 of user core. Sep 12 23:56:31.107933 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 23:56:31.169992 systemd-networkd[1374]: calicdb9b448b36: Gained IPv6LL Sep 12 23:56:31.480479 sshd[5149]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:31.485102 systemd[1]: sshd@7-10.0.0.36:22-10.0.0.1:41518.service: Deactivated successfully. Sep 12 23:56:31.489847 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 23:56:31.490709 systemd-logind[1433]: Session 8 logged out. Waiting for processes to exit. Sep 12 23:56:31.492158 systemd-logind[1433]: Removed session 8. 
Sep 12 23:56:31.525186 containerd[1447]: time="2025-09-12T23:56:31.525126023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:31.526086 containerd[1447]: time="2025-09-12T23:56:31.525941495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 12 23:56:31.529052 containerd[1447]: time="2025-09-12T23:56:31.527726158Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:31.533773 containerd[1447]: time="2025-09-12T23:56:31.531390724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:31.533773 containerd[1447]: time="2025-09-12T23:56:31.531466323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.850097223s" Sep 12 23:56:31.533773 containerd[1447]: time="2025-09-12T23:56:31.532972269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 12 23:56:31.535396 containerd[1447]: time="2025-09-12T23:56:31.535154489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 23:56:31.536657 containerd[1447]: time="2025-09-12T23:56:31.536618595Z" level=info msg="CreateContainer within sandbox \"60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 23:56:31.553101 containerd[1447]: time="2025-09-12T23:56:31.553048121Z" level=info msg="CreateContainer within sandbox \"60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4119c43fae1712089740ee7741a518bc15a94b1e9d7c2237aed1e67f2cb84a7b\"" Sep 12 23:56:31.553666 containerd[1447]: time="2025-09-12T23:56:31.553626836Z" level=info msg="StartContainer for \"4119c43fae1712089740ee7741a518bc15a94b1e9d7c2237aed1e67f2cb84a7b\"" Sep 12 23:56:31.587996 systemd[1]: Started cri-containerd-4119c43fae1712089740ee7741a518bc15a94b1e9d7c2237aed1e67f2cb84a7b.scope - libcontainer container 4119c43fae1712089740ee7741a518bc15a94b1e9d7c2237aed1e67f2cb84a7b. 
Sep 12 23:56:31.631269 containerd[1447]: time="2025-09-12T23:56:31.631221108Z" level=info msg="StartContainer for \"4119c43fae1712089740ee7741a518bc15a94b1e9d7c2237aed1e67f2cb84a7b\" returns successfully" Sep 12 23:56:31.899143 kubelet[2479]: I0912 23:56:31.899074 2479 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 23:56:31.906669 kubelet[2479]: I0912 23:56:31.906632 2479 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 23:56:33.416416 containerd[1447]: time="2025-09-12T23:56:33.416328267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:33.418935 containerd[1447]: time="2025-09-12T23:56:33.418886884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 23:56:33.419555 containerd[1447]: time="2025-09-12T23:56:33.419342840Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:33.421721 containerd[1447]: time="2025-09-12T23:56:33.421679419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:33.423583 containerd[1447]: time="2025-09-12T23:56:33.423546042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.888353795s" Sep 12 23:56:33.423874 containerd[1447]: time="2025-09-12T23:56:33.423588602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 23:56:33.424798 containerd[1447]: time="2025-09-12T23:56:33.424582553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 23:56:33.429224 containerd[1447]: time="2025-09-12T23:56:33.429184192Z" level=info msg="CreateContainer within sandbox \"7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 23:56:33.453218 containerd[1447]: time="2025-09-12T23:56:33.453150857Z" level=info msg="CreateContainer within sandbox \"7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ea2483695b19f1f990a07fed685bf23028c4e305eb025d94ec9f97614e1e20e0\"" Sep 12 23:56:33.455104 containerd[1447]: time="2025-09-12T23:56:33.455065400Z" level=info msg="StartContainer for \"ea2483695b19f1f990a07fed685bf23028c4e305eb025d94ec9f97614e1e20e0\"" Sep 12 23:56:33.506007 systemd[1]: Started cri-containerd-ea2483695b19f1f990a07fed685bf23028c4e305eb025d94ec9f97614e1e20e0.scope - libcontainer container ea2483695b19f1f990a07fed685bf23028c4e305eb025d94ec9f97614e1e20e0. 
Sep 12 23:56:33.614995 containerd[1447]: time="2025-09-12T23:56:33.614857965Z" level=info msg="StartContainer for \"ea2483695b19f1f990a07fed685bf23028c4e305eb025d94ec9f97614e1e20e0\" returns successfully" Sep 12 23:56:34.150796 kubelet[2479]: I0912 23:56:34.150550 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w5b24" podStartSLOduration=22.93410427 podStartE2EDuration="28.150531503s" podCreationTimestamp="2025-09-12 23:56:06 +0000 UTC" firstStartedPulling="2025-09-12 23:56:26.318581697 +0000 UTC m=+39.582812896" lastFinishedPulling="2025-09-12 23:56:31.53500889 +0000 UTC m=+44.799240129" observedRunningTime="2025-09-12 23:56:32.14533796 +0000 UTC m=+45.409569199" watchObservedRunningTime="2025-09-12 23:56:34.150531503 +0000 UTC m=+47.414762742" Sep 12 23:56:34.151228 kubelet[2479]: I0912 23:56:34.150874 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dbd74567-jv87r" podStartSLOduration=27.964661685 podStartE2EDuration="33.1508671s" podCreationTimestamp="2025-09-12 23:56:01 +0000 UTC" firstStartedPulling="2025-09-12 23:56:28.238251579 +0000 UTC m=+41.502482818" lastFinishedPulling="2025-09-12 23:56:33.424456994 +0000 UTC m=+46.688688233" observedRunningTime="2025-09-12 23:56:34.150143667 +0000 UTC m=+47.414374906" watchObservedRunningTime="2025-09-12 23:56:34.1508671 +0000 UTC m=+47.415098299" Sep 12 23:56:35.248965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3376812899.mount: Deactivated successfully. Sep 12 23:56:35.707227 containerd[1447]: time="2025-09-12T23:56:35.706226214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:35.711192 containerd[1447]: time="2025-09-12T23:56:35.711144132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 12 23:56:35.713387 containerd[1447]: time="2025-09-12T23:56:35.713353313Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:35.721912 containerd[1447]: time="2025-09-12T23:56:35.721861239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:35.722877 containerd[1447]: time="2025-09-12T23:56:35.722839471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.298223918s" Sep 12 23:56:35.722946 containerd[1447]: time="2025-09-12T23:56:35.722885110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 12 23:56:35.724673 containerd[1447]: time="2025-09-12T23:56:35.724442297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 23:56:35.725737 containerd[1447]: time="2025-09-12T23:56:35.725696966Z" level=info msg="CreateContainer within sandbox 
\"cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 23:56:35.751679 containerd[1447]: time="2025-09-12T23:56:35.751624302Z" level=info msg="CreateContainer within sandbox \"cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7be886c6ddfbf6157c3a397a18f29a0a1316c9e773f537953edd60a8d89a83a8\"" Sep 12 23:56:35.752501 containerd[1447]: time="2025-09-12T23:56:35.752470615Z" level=info msg="StartContainer for \"7be886c6ddfbf6157c3a397a18f29a0a1316c9e773f537953edd60a8d89a83a8\"" Sep 12 23:56:35.797938 systemd[1]: Started cri-containerd-7be886c6ddfbf6157c3a397a18f29a0a1316c9e773f537953edd60a8d89a83a8.scope - libcontainer container 7be886c6ddfbf6157c3a397a18f29a0a1316c9e773f537953edd60a8d89a83a8. Sep 12 23:56:35.833868 containerd[1447]: time="2025-09-12T23:56:35.833800153Z" level=info msg="StartContainer for \"7be886c6ddfbf6157c3a397a18f29a0a1316c9e773f537953edd60a8d89a83a8\" returns successfully" Sep 12 23:56:36.049145 containerd[1447]: time="2025-09-12T23:56:36.049096702Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:36.051183 containerd[1447]: time="2025-09-12T23:56:36.051129405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 23:56:36.062128 containerd[1447]: time="2025-09-12T23:56:36.062086952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 337.405417ms" Sep 12 23:56:36.062128 containerd[1447]: time="2025-09-12T23:56:36.062129992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 23:56:36.065391 containerd[1447]: time="2025-09-12T23:56:36.065355485Z" level=info msg="CreateContainer within sandbox \"f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 23:56:36.089789 containerd[1447]: time="2025-09-12T23:56:36.089718798Z" level=info msg="CreateContainer within sandbox \"f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"68024aacdbfef0868a38cb3a24cd2b4ac4d59e6123c9933e5c08433031d4206f\"" Sep 12 23:56:36.090778 containerd[1447]: time="2025-09-12T23:56:36.090545591Z" level=info msg="StartContainer for \"68024aacdbfef0868a38cb3a24cd2b4ac4d59e6123c9933e5c08433031d4206f\"" Sep 12 23:56:36.117323 systemd[1]: run-containerd-runc-k8s.io-68024aacdbfef0868a38cb3a24cd2b4ac4d59e6123c9933e5c08433031d4206f-runc.iKnx0g.mount: Deactivated successfully. Sep 12 23:56:36.130942 systemd[1]: Started cri-containerd-68024aacdbfef0868a38cb3a24cd2b4ac4d59e6123c9933e5c08433031d4206f.scope - libcontainer container 68024aacdbfef0868a38cb3a24cd2b4ac4d59e6123c9933e5c08433031d4206f. 
Sep 12 23:56:36.186813 containerd[1447]: time="2025-09-12T23:56:36.186692617Z" level=info msg="StartContainer for \"68024aacdbfef0868a38cb3a24cd2b4ac4d59e6123c9933e5c08433031d4206f\" returns successfully" Sep 12 23:56:36.491494 systemd[1]: Started sshd@8-10.0.0.36:22-10.0.0.1:41526.service - OpenSSH per-connection server daemon (10.0.0.1:41526). Sep 12 23:56:36.550418 sshd[5380]: Accepted publickey for core from 10.0.0.1 port 41526 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 12 23:56:36.552528 sshd[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:56:36.557773 systemd-logind[1433]: New session 9 of user core. Sep 12 23:56:36.566964 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 23:56:37.008530 sshd[5380]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:37.011855 systemd-logind[1433]: Session 9 logged out. Waiting for processes to exit. Sep 12 23:56:37.012135 systemd[1]: sshd@8-10.0.0.36:22-10.0.0.1:41526.service: Deactivated successfully. Sep 12 23:56:37.015300 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 23:56:37.018229 systemd-logind[1433]: Removed session 9. Sep 12 23:56:37.179450 kubelet[2479]: I0912 23:56:37.178198 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-fbbp9" podStartSLOduration=24.951378849 podStartE2EDuration="31.178181041s" podCreationTimestamp="2025-09-12 23:56:06 +0000 UTC" firstStartedPulling="2025-09-12 23:56:29.497410747 +0000 UTC m=+42.761642026" lastFinishedPulling="2025-09-12 23:56:35.724212979 +0000 UTC m=+48.988444218" observedRunningTime="2025-09-12 23:56:36.159538287 +0000 UTC m=+49.423769566" watchObservedRunningTime="2025-09-12 23:56:37.178181041 +0000 UTC m=+50.442412280" Sep 12 23:56:37.182377 kubelet[2479]: I0912 23:56:37.180877 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dbd74567-2bsfz" podStartSLOduration=29.621218243 podStartE2EDuration="36.180822179s" podCreationTimestamp="2025-09-12 23:56:01 +0000 UTC" firstStartedPulling="2025-09-12 23:56:29.503543287 +0000 UTC m=+42.767774486" lastFinishedPulling="2025-09-12 23:56:36.063147223 +0000 UTC m=+49.327378422" observedRunningTime="2025-09-12 23:56:37.179138553 +0000 UTC m=+50.443369792" watchObservedRunningTime="2025-09-12 23:56:37.180822179 +0000 UTC m=+50.445053538" Sep 12 23:56:42.019827 systemd[1]: Started sshd@9-10.0.0.36:22-10.0.0.1:34738.service - OpenSSH per-connection server daemon (10.0.0.1:34738). Sep 12 23:56:42.087201 sshd[5430]: Accepted publickey for core from 10.0.0.1 port 34738 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 12 23:56:42.089260 sshd[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:56:42.093229 systemd-logind[1433]: New session 10 of user core. Sep 12 23:56:42.100985 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 23:56:42.329484 sshd[5430]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:42.336753 systemd[1]: sshd@9-10.0.0.36:22-10.0.0.1:34738.service: Deactivated successfully. Sep 12 23:56:42.339191 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 23:56:42.340940 systemd-logind[1433]: Session 10 logged out. Waiting for processes to exit. Sep 12 23:56:42.347352 systemd[1]: Started sshd@10-10.0.0.36:22-10.0.0.1:34752.service - OpenSSH per-connection server daemon (10.0.0.1:34752). 
Sep 12 23:56:42.349295 systemd-logind[1433]: Removed session 10. Sep 12 23:56:42.388185 sshd[5449]: Accepted publickey for core from 10.0.0.1 port 34752 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 12 23:56:42.389564 sshd[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:56:42.393820 systemd-logind[1433]: New session 11 of user core. Sep 12 23:56:42.404026 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 23:56:42.596972 sshd[5449]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:42.609638 systemd[1]: sshd@10-10.0.0.36:22-10.0.0.1:34752.service: Deactivated successfully. Sep 12 23:56:42.612462 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 23:56:42.615718 systemd-logind[1433]: Session 11 logged out. Waiting for processes to exit. Sep 12 23:56:42.626532 systemd[1]: Started sshd@11-10.0.0.36:22-10.0.0.1:34766.service - OpenSSH per-connection server daemon (10.0.0.1:34766). Sep 12 23:56:42.629323 systemd-logind[1433]: Removed session 11. Sep 12 23:56:42.669773 sshd[5461]: Accepted publickey for core from 10.0.0.1 port 34766 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 12 23:56:42.671056 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:56:42.675007 systemd-logind[1433]: New session 12 of user core. Sep 12 23:56:42.688995 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 23:56:42.853061 sshd[5461]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:42.858107 systemd[1]: sshd@11-10.0.0.36:22-10.0.0.1:34766.service: Deactivated successfully. Sep 12 23:56:42.859810 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 23:56:42.860614 systemd-logind[1433]: Session 12 logged out. Waiting for processes to exit. Sep 12 23:56:42.861569 systemd-logind[1433]: Removed session 12. Sep 12 23:56:46.815336 containerd[1447]: time="2025-09-12T23:56:46.815212704Z" level=info msg="StopPodSandbox for \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\"" Sep 12 23:56:46.935872 containerd[1447]: 2025-09-12 23:56:46.883 [WARNING][5484] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a49c785-0fd4-496d-8891-33121806033d", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b", Pod:"coredns-668d6bf9bc-ngcpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c2e1d0ed3b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:46.935872 containerd[1447]: 2025-09-12 23:56:46.883 [INFO][5484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:46.935872 containerd[1447]: 2025-09-12 23:56:46.883 [INFO][5484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" iface="eth0" netns="" Sep 12 23:56:46.935872 containerd[1447]: 2025-09-12 23:56:46.884 [INFO][5484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:46.935872 containerd[1447]: 2025-09-12 23:56:46.884 [INFO][5484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:46.935872 containerd[1447]: 2025-09-12 23:56:46.917 [INFO][5495] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" HandleID="k8s-pod-network.2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Workload="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:46.935872 containerd[1447]: 2025-09-12 23:56:46.918 [INFO][5495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:46.935872 containerd[1447]: 2025-09-12 23:56:46.919 [INFO][5495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:46.935872 containerd[1447]: 2025-09-12 23:56:46.928 [WARNING][5495] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" HandleID="k8s-pod-network.2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Workload="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:46.935872 containerd[1447]: 2025-09-12 23:56:46.928 [INFO][5495] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" HandleID="k8s-pod-network.2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Workload="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:46.935872 containerd[1447]: 2025-09-12 23:56:46.930 [INFO][5495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:46.935872 containerd[1447]: 2025-09-12 23:56:46.932 [INFO][5484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:46.937081 containerd[1447]: time="2025-09-12T23:56:46.935910816Z" level=info msg="TearDown network for sandbox \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\" successfully" Sep 12 23:56:46.937081 containerd[1447]: time="2025-09-12T23:56:46.935935176Z" level=info msg="StopPodSandbox for \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\" returns successfully" Sep 12 23:56:46.937081 containerd[1447]: time="2025-09-12T23:56:46.936539812Z" level=info msg="RemovePodSandbox for \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\"" Sep 12 23:56:46.945655 containerd[1447]: time="2025-09-12T23:56:46.945500866Z" level=info msg="Forcibly stopping sandbox \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\"" Sep 12 23:56:47.013790 containerd[1447]: 2025-09-12 23:56:46.979 [WARNING][5514] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a49c785-0fd4-496d-8891-33121806033d", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bb2b02956d13fc9afe8f91d2370476e35b73461b4dfea847fd6c5b937ac948b", Pod:"coredns-668d6bf9bc-ngcpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c2e1d0ed3b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:47.013790 containerd[1447]: 2025-09-12 23:56:46.979 [INFO][5514] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:47.013790 containerd[1447]: 2025-09-12 23:56:46.979 [INFO][5514] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" iface="eth0" netns="" Sep 12 23:56:47.013790 containerd[1447]: 2025-09-12 23:56:46.979 [INFO][5514] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:47.013790 containerd[1447]: 2025-09-12 23:56:46.979 [INFO][5514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:47.013790 containerd[1447]: 2025-09-12 23:56:46.998 [INFO][5523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" HandleID="k8s-pod-network.2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Workload="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:47.013790 containerd[1447]: 2025-09-12 23:56:46.998 [INFO][5523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:47.013790 containerd[1447]: 2025-09-12 23:56:46.998 [INFO][5523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:47.013790 containerd[1447]: 2025-09-12 23:56:47.008 [WARNING][5523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" HandleID="k8s-pod-network.2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Workload="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:47.013790 containerd[1447]: 2025-09-12 23:56:47.008 [INFO][5523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" HandleID="k8s-pod-network.2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Workload="localhost-k8s-coredns--668d6bf9bc--ngcpg-eth0" Sep 12 23:56:47.013790 containerd[1447]: 2025-09-12 23:56:47.010 [INFO][5523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:47.013790 containerd[1447]: 2025-09-12 23:56:47.011 [INFO][5514] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170" Sep 12 23:56:47.013790 containerd[1447]: time="2025-09-12T23:56:47.013607886Z" level=info msg="TearDown network for sandbox \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\" successfully" Sep 12 23:56:47.023519 containerd[1447]: time="2025-09-12T23:56:47.023472534Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:47.023652 containerd[1447]: time="2025-09-12T23:56:47.023552893Z" level=info msg="RemovePodSandbox \"2527cec58765a7e00c40f1cac1d828d535a9b2063ec407e93da69ea275b01170\" returns successfully" Sep 12 23:56:47.024108 containerd[1447]: time="2025-09-12T23:56:47.024082730Z" level=info msg="StopPodSandbox for \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\"" Sep 12 23:56:47.099218 containerd[1447]: 2025-09-12 23:56:47.062 [WARNING][5541] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0", GenerateName:"calico-kube-controllers-66cb8fb495-", Namespace:"calico-system", SelfLink:"", UID:"4ac91082-d88e-4327-ad72-86092b0b92eb", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66cb8fb495", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62", Pod:"calico-kube-controllers-66cb8fb495-tc6df", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif9467aaade1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:47.099218 containerd[1447]: 2025-09-12 23:56:47.062 [INFO][5541] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:47.099218 containerd[1447]: 2025-09-12 23:56:47.062 [INFO][5541] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" iface="eth0" netns="" Sep 12 23:56:47.099218 containerd[1447]: 2025-09-12 23:56:47.062 [INFO][5541] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:47.099218 containerd[1447]: 2025-09-12 23:56:47.062 [INFO][5541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:47.099218 containerd[1447]: 2025-09-12 23:56:47.081 [INFO][5549] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" HandleID="k8s-pod-network.d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Workload="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:47.099218 containerd[1447]: 2025-09-12 23:56:47.081 [INFO][5549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:47.099218 containerd[1447]: 2025-09-12 23:56:47.081 [INFO][5549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:47.099218 containerd[1447]: 2025-09-12 23:56:47.093 [WARNING][5549] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" HandleID="k8s-pod-network.d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Workload="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:47.099218 containerd[1447]: 2025-09-12 23:56:47.093 [INFO][5549] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" HandleID="k8s-pod-network.d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Workload="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:47.099218 containerd[1447]: 2025-09-12 23:56:47.095 [INFO][5549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:47.099218 containerd[1447]: 2025-09-12 23:56:47.097 [INFO][5541] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:47.099745 containerd[1447]: time="2025-09-12T23:56:47.099248143Z" level=info msg="TearDown network for sandbox \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\" successfully" Sep 12 23:56:47.099745 containerd[1447]: time="2025-09-12T23:56:47.099278983Z" level=info msg="StopPodSandbox for \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\" returns successfully" Sep 12 23:56:47.101827 containerd[1447]: time="2025-09-12T23:56:47.101516966Z" level=info msg="RemovePodSandbox for \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\"" Sep 12 23:56:47.101827 containerd[1447]: time="2025-09-12T23:56:47.101637365Z" level=info msg="Forcibly stopping sandbox \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\"" Sep 12 23:56:47.169035 containerd[1447]: 2025-09-12 23:56:47.134 [WARNING][5567] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0", GenerateName:"calico-kube-controllers-66cb8fb495-", Namespace:"calico-system", SelfLink:"", UID:"4ac91082-d88e-4327-ad72-86092b0b92eb", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66cb8fb495", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"327380752d8e62f10b29ac8fa9a4f786e4c138b5f78018d59e12cca04e7f0a62", Pod:"calico-kube-controllers-66cb8fb495-tc6df", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif9467aaade1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:47.169035 containerd[1447]: 2025-09-12 23:56:47.134 [INFO][5567] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:47.169035 containerd[1447]: 2025-09-12 23:56:47.134 [INFO][5567] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" iface="eth0" netns="" Sep 12 23:56:47.169035 containerd[1447]: 2025-09-12 23:56:47.134 [INFO][5567] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:47.169035 containerd[1447]: 2025-09-12 23:56:47.134 [INFO][5567] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:47.169035 containerd[1447]: 2025-09-12 23:56:47.154 [INFO][5576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" HandleID="k8s-pod-network.d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Workload="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:47.169035 containerd[1447]: 2025-09-12 23:56:47.154 [INFO][5576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:47.169035 containerd[1447]: 2025-09-12 23:56:47.154 [INFO][5576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:47.169035 containerd[1447]: 2025-09-12 23:56:47.163 [WARNING][5576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" HandleID="k8s-pod-network.d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Workload="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:47.169035 containerd[1447]: 2025-09-12 23:56:47.163 [INFO][5576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" HandleID="k8s-pod-network.d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Workload="localhost-k8s-calico--kube--controllers--66cb8fb495--tc6df-eth0" Sep 12 23:56:47.169035 containerd[1447]: 2025-09-12 23:56:47.165 [INFO][5576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:47.169035 containerd[1447]: 2025-09-12 23:56:47.167 [INFO][5567] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87" Sep 12 23:56:47.169685 containerd[1447]: time="2025-09-12T23:56:47.169080755Z" level=info msg="TearDown network for sandbox \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\" successfully" Sep 12 23:56:47.178620 containerd[1447]: time="2025-09-12T23:56:47.178486326Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:47.178620 containerd[1447]: time="2025-09-12T23:56:47.178546326Z" level=info msg="RemovePodSandbox \"d7862cd3455e78d9c8537b12b34f9adc1795fb19a1c0616560fba0f5c21f6c87\" returns successfully" Sep 12 23:56:47.178852 containerd[1447]: time="2025-09-12T23:56:47.178817444Z" level=info msg="StopPodSandbox for \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\"" Sep 12 23:56:47.260745 containerd[1447]: 2025-09-12 23:56:47.223 [WARNING][5594] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0", GenerateName:"calico-apiserver-6dbd74567-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e072d2b-0542-4e7a-92e2-10800c8d71d7", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbd74567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1", Pod:"calico-apiserver-6dbd74567-2bsfz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdb9b448b36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:47.260745 containerd[1447]: 2025-09-12 23:56:47.223 [INFO][5594] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:47.260745 containerd[1447]: 2025-09-12 23:56:47.223 [INFO][5594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" iface="eth0" netns="" Sep 12 23:56:47.260745 containerd[1447]: 2025-09-12 23:56:47.223 [INFO][5594] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:47.260745 containerd[1447]: 2025-09-12 23:56:47.223 [INFO][5594] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:47.260745 containerd[1447]: 2025-09-12 23:56:47.242 [INFO][5604] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" HandleID="k8s-pod-network.4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Workload="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:47.260745 containerd[1447]: 2025-09-12 23:56:47.242 [INFO][5604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:47.260745 containerd[1447]: 2025-09-12 23:56:47.243 [INFO][5604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:47.260745 containerd[1447]: 2025-09-12 23:56:47.255 [WARNING][5604] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" HandleID="k8s-pod-network.4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Workload="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:47.260745 containerd[1447]: 2025-09-12 23:56:47.255 [INFO][5604] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" HandleID="k8s-pod-network.4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Workload="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:47.260745 containerd[1447]: 2025-09-12 23:56:47.257 [INFO][5604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:47.260745 containerd[1447]: 2025-09-12 23:56:47.259 [INFO][5594] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:47.261532 containerd[1447]: time="2025-09-12T23:56:47.260803047Z" level=info msg="TearDown network for sandbox \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\" successfully" Sep 12 23:56:47.261532 containerd[1447]: time="2025-09-12T23:56:47.260827607Z" level=info msg="StopPodSandbox for \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\" returns successfully" Sep 12 23:56:47.261532 containerd[1447]: time="2025-09-12T23:56:47.261255964Z" level=info msg="RemovePodSandbox for \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\"" Sep 12 23:56:47.261532 containerd[1447]: time="2025-09-12T23:56:47.261286324Z" level=info msg="Forcibly stopping sandbox \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\"" Sep 12 23:56:47.331265 containerd[1447]: 2025-09-12 23:56:47.298 [WARNING][5623] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0", GenerateName:"calico-apiserver-6dbd74567-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e072d2b-0542-4e7a-92e2-10800c8d71d7", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbd74567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8e10a3a29ce1512d83af6f70c8a2ca408d216756db5fb6743a01fda2b793eb1", Pod:"calico-apiserver-6dbd74567-2bsfz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdb9b448b36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:47.331265 containerd[1447]: 2025-09-12 23:56:47.298 [INFO][5623] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:47.331265 containerd[1447]: 2025-09-12 23:56:47.298 [INFO][5623] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" iface="eth0" netns="" Sep 12 23:56:47.331265 containerd[1447]: 2025-09-12 23:56:47.298 [INFO][5623] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:47.331265 containerd[1447]: 2025-09-12 23:56:47.298 [INFO][5623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:47.331265 containerd[1447]: 2025-09-12 23:56:47.317 [INFO][5632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" HandleID="k8s-pod-network.4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Workload="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:47.331265 containerd[1447]: 2025-09-12 23:56:47.317 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:47.331265 containerd[1447]: 2025-09-12 23:56:47.317 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:47.331265 containerd[1447]: 2025-09-12 23:56:47.325 [WARNING][5632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" HandleID="k8s-pod-network.4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Workload="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:47.331265 containerd[1447]: 2025-09-12 23:56:47.325 [INFO][5632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" HandleID="k8s-pod-network.4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Workload="localhost-k8s-calico--apiserver--6dbd74567--2bsfz-eth0" Sep 12 23:56:47.331265 containerd[1447]: 2025-09-12 23:56:47.327 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:47.331265 containerd[1447]: 2025-09-12 23:56:47.328 [INFO][5623] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0" Sep 12 23:56:47.331736 containerd[1447]: time="2025-09-12T23:56:47.331301414Z" level=info msg="TearDown network for sandbox \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\" successfully" Sep 12 23:56:47.334890 containerd[1447]: time="2025-09-12T23:56:47.334855789Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:47.334977 containerd[1447]: time="2025-09-12T23:56:47.334921148Z" level=info msg="RemovePodSandbox \"4d18b140a26707e98bc29d95048a99f5471ce1f3389e6f5dc32d0cd1018ac5b0\" returns successfully" Sep 12 23:56:47.335449 containerd[1447]: time="2025-09-12T23:56:47.335424104Z" level=info msg="StopPodSandbox for \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\"" Sep 12 23:56:47.407829 containerd[1447]: 2025-09-12 23:56:47.372 [WARNING][5649] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0", GenerateName:"calico-apiserver-6dbd74567-", Namespace:"calico-apiserver", SelfLink:"", UID:"84ebbe55-2c84-464d-aba6-aefb412ce42b", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbd74567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344", Pod:"calico-apiserver-6dbd74567-jv87r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf2b7b16249", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:47.407829 containerd[1447]: 2025-09-12 23:56:47.372 [INFO][5649] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:47.407829 containerd[1447]: 2025-09-12 23:56:47.372 [INFO][5649] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" iface="eth0" netns="" Sep 12 23:56:47.407829 containerd[1447]: 2025-09-12 23:56:47.372 [INFO][5649] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:47.407829 containerd[1447]: 2025-09-12 23:56:47.372 [INFO][5649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:47.407829 containerd[1447]: 2025-09-12 23:56:47.394 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" HandleID="k8s-pod-network.5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Workload="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:47.407829 containerd[1447]: 2025-09-12 23:56:47.394 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:47.407829 containerd[1447]: 2025-09-12 23:56:47.394 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:47.407829 containerd[1447]: 2025-09-12 23:56:47.402 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" HandleID="k8s-pod-network.5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Workload="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:47.407829 containerd[1447]: 2025-09-12 23:56:47.402 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" HandleID="k8s-pod-network.5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Workload="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:47.407829 containerd[1447]: 2025-09-12 23:56:47.404 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:47.407829 containerd[1447]: 2025-09-12 23:56:47.406 [INFO][5649] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:47.407829 containerd[1447]: time="2025-09-12T23:56:47.407806178Z" level=info msg="TearDown network for sandbox \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\" successfully" Sep 12 23:56:47.408246 containerd[1447]: time="2025-09-12T23:56:47.407838498Z" level=info msg="StopPodSandbox for \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\" returns successfully" Sep 12 23:56:47.408486 containerd[1447]: time="2025-09-12T23:56:47.408437213Z" level=info msg="RemovePodSandbox for \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\"" Sep 12 23:56:47.408486 containerd[1447]: time="2025-09-12T23:56:47.408474373Z" level=info msg="Forcibly stopping sandbox \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\"" Sep 12 23:56:47.486871 containerd[1447]: 2025-09-12 23:56:47.446 [WARNING][5674] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0", GenerateName:"calico-apiserver-6dbd74567-", Namespace:"calico-apiserver", SelfLink:"", UID:"84ebbe55-2c84-464d-aba6-aefb412ce42b", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbd74567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7634ec4d62385765270be983eeb91c1720fdb9cbed1e7cbb965bed22ddb0b344", Pod:"calico-apiserver-6dbd74567-jv87r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf2b7b16249", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:47.486871 containerd[1447]: 2025-09-12 23:56:47.446 [INFO][5674] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:47.486871 containerd[1447]: 2025-09-12 23:56:47.446 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" iface="eth0" netns="" Sep 12 23:56:47.486871 containerd[1447]: 2025-09-12 23:56:47.446 [INFO][5674] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:47.486871 containerd[1447]: 2025-09-12 23:56:47.446 [INFO][5674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:47.486871 containerd[1447]: 2025-09-12 23:56:47.467 [INFO][5683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" HandleID="k8s-pod-network.5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Workload="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:47.486871 containerd[1447]: 2025-09-12 23:56:47.467 [INFO][5683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:47.486871 containerd[1447]: 2025-09-12 23:56:47.467 [INFO][5683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:47.486871 containerd[1447]: 2025-09-12 23:56:47.477 [WARNING][5683] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" HandleID="k8s-pod-network.5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Workload="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:47.486871 containerd[1447]: 2025-09-12 23:56:47.477 [INFO][5683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" HandleID="k8s-pod-network.5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Workload="localhost-k8s-calico--apiserver--6dbd74567--jv87r-eth0" Sep 12 23:56:47.486871 containerd[1447]: 2025-09-12 23:56:47.479 [INFO][5683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:47.486871 containerd[1447]: 2025-09-12 23:56:47.482 [INFO][5674] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a" Sep 12 23:56:47.486871 containerd[1447]: time="2025-09-12T23:56:47.485801450Z" level=info msg="TearDown network for sandbox \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\" successfully" Sep 12 23:56:47.489091 containerd[1447]: time="2025-09-12T23:56:47.488934748Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:47.489091 containerd[1447]: time="2025-09-12T23:56:47.489001547Z" level=info msg="RemovePodSandbox \"5f8f1df09264d9600941f6a2b708f7ccd1c3fc1ec0e57da97150ca9f4fa5016a\" returns successfully" Sep 12 23:56:47.489457 containerd[1447]: time="2025-09-12T23:56:47.489432864Z" level=info msg="StopPodSandbox for \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\"" Sep 12 23:56:47.569992 containerd[1447]: 2025-09-12 23:56:47.529 [WARNING][5701] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" WorkloadEndpoint="localhost-k8s-whisker--5cc464486f--psq9v-eth0" Sep 12 23:56:47.569992 containerd[1447]: 2025-09-12 23:56:47.529 [INFO][5701] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:47.569992 containerd[1447]: 2025-09-12 23:56:47.529 [INFO][5701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" iface="eth0" netns="" Sep 12 23:56:47.569992 containerd[1447]: 2025-09-12 23:56:47.529 [INFO][5701] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:47.569992 containerd[1447]: 2025-09-12 23:56:47.529 [INFO][5701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:47.569992 containerd[1447]: 2025-09-12 23:56:47.550 [INFO][5710] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" HandleID="k8s-pod-network.aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Workload="localhost-k8s-whisker--5cc464486f--psq9v-eth0" Sep 12 23:56:47.569992 containerd[1447]: 2025-09-12 23:56:47.550 [INFO][5710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:47.569992 containerd[1447]: 2025-09-12 23:56:47.550 [INFO][5710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:47.569992 containerd[1447]: 2025-09-12 23:56:47.562 [WARNING][5710] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" HandleID="k8s-pod-network.aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Workload="localhost-k8s-whisker--5cc464486f--psq9v-eth0" Sep 12 23:56:47.569992 containerd[1447]: 2025-09-12 23:56:47.562 [INFO][5710] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" HandleID="k8s-pod-network.aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Workload="localhost-k8s-whisker--5cc464486f--psq9v-eth0" Sep 12 23:56:47.569992 containerd[1447]: 2025-09-12 23:56:47.563 [INFO][5710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:47.569992 containerd[1447]: 2025-09-12 23:56:47.565 [INFO][5701] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:47.570353 containerd[1447]: time="2025-09-12T23:56:47.570019558Z" level=info msg="TearDown network for sandbox \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\" successfully" Sep 12 23:56:47.570353 containerd[1447]: time="2025-09-12T23:56:47.570045077Z" level=info msg="StopPodSandbox for \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\" returns successfully" Sep 12 23:56:47.570616 containerd[1447]: time="2025-09-12T23:56:47.570592113Z" level=info msg="RemovePodSandbox for \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\"" Sep 12 23:56:47.570657 containerd[1447]: time="2025-09-12T23:56:47.570623353Z" level=info msg="Forcibly stopping sandbox \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\"" Sep 12 23:56:47.644197 containerd[1447]: 2025-09-12 23:56:47.610 [WARNING][5727] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" WorkloadEndpoint="localhost-k8s-whisker--5cc464486f--psq9v-eth0" Sep 12 23:56:47.644197 containerd[1447]: 2025-09-12 23:56:47.610 [INFO][5727] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:47.644197 containerd[1447]: 2025-09-12 23:56:47.610 [INFO][5727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" iface="eth0" netns="" Sep 12 23:56:47.644197 containerd[1447]: 2025-09-12 23:56:47.610 [INFO][5727] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:47.644197 containerd[1447]: 2025-09-12 23:56:47.610 [INFO][5727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:47.644197 containerd[1447]: 2025-09-12 23:56:47.629 [INFO][5737] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" HandleID="k8s-pod-network.aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Workload="localhost-k8s-whisker--5cc464486f--psq9v-eth0" Sep 12 23:56:47.644197 containerd[1447]: 2025-09-12 23:56:47.629 [INFO][5737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:47.644197 containerd[1447]: 2025-09-12 23:56:47.629 [INFO][5737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:47.644197 containerd[1447]: 2025-09-12 23:56:47.638 [WARNING][5737] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" HandleID="k8s-pod-network.aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Workload="localhost-k8s-whisker--5cc464486f--psq9v-eth0" Sep 12 23:56:47.644197 containerd[1447]: 2025-09-12 23:56:47.638 [INFO][5737] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" HandleID="k8s-pod-network.aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Workload="localhost-k8s-whisker--5cc464486f--psq9v-eth0" Sep 12 23:56:47.644197 containerd[1447]: 2025-09-12 23:56:47.640 [INFO][5737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:47.644197 containerd[1447]: 2025-09-12 23:56:47.642 [INFO][5727] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2" Sep 12 23:56:47.644537 containerd[1447]: time="2025-09-12T23:56:47.644250178Z" level=info msg="TearDown network for sandbox \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\" successfully" Sep 12 23:56:47.647374 containerd[1447]: time="2025-09-12T23:56:47.647336195Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:47.647445 containerd[1447]: time="2025-09-12T23:56:47.647402515Z" level=info msg="RemovePodSandbox \"aadac5b88b5b764e0f1a49f9e476b96fdf4fa85eb708588a5d354a89b02097f2\" returns successfully" Sep 12 23:56:47.648008 containerd[1447]: time="2025-09-12T23:56:47.647974950Z" level=info msg="StopPodSandbox for \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\"" Sep 12 23:56:47.729570 containerd[1447]: 2025-09-12 23:56:47.689 [WARNING][5756] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w5b24-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2a2755c3-03c7-4f05-b24d-8c93e47436ce", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482", Pod:"csi-node-driver-w5b24", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4751d6d8ff1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:47.729570 containerd[1447]: 2025-09-12 23:56:47.689 [INFO][5756] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:47.729570 containerd[1447]: 2025-09-12 23:56:47.689 [INFO][5756] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" iface="eth0" netns="" Sep 12 23:56:47.729570 containerd[1447]: 2025-09-12 23:56:47.689 [INFO][5756] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:47.729570 containerd[1447]: 2025-09-12 23:56:47.689 [INFO][5756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:47.729570 containerd[1447]: 2025-09-12 23:56:47.710 [INFO][5765] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" HandleID="k8s-pod-network.6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Workload="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:47.729570 containerd[1447]: 2025-09-12 23:56:47.711 [INFO][5765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:47.729570 containerd[1447]: 2025-09-12 23:56:47.711 [INFO][5765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:47.729570 containerd[1447]: 2025-09-12 23:56:47.723 [WARNING][5765] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" HandleID="k8s-pod-network.6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Workload="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:47.729570 containerd[1447]: 2025-09-12 23:56:47.723 [INFO][5765] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" HandleID="k8s-pod-network.6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Workload="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:47.729570 containerd[1447]: 2025-09-12 23:56:47.724 [INFO][5765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:47.729570 containerd[1447]: 2025-09-12 23:56:47.726 [INFO][5756] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:47.729570 containerd[1447]: time="2025-09-12T23:56:47.729520757Z" level=info msg="TearDown network for sandbox \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\" successfully" Sep 12 23:56:47.729570 containerd[1447]: time="2025-09-12T23:56:47.729548157Z" level=info msg="StopPodSandbox for \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\" returns successfully" Sep 12 23:56:47.730820 containerd[1447]: time="2025-09-12T23:56:47.730790908Z" level=info msg="RemovePodSandbox for \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\"" Sep 12 23:56:47.730951 containerd[1447]: time="2025-09-12T23:56:47.730827708Z" level=info msg="Forcibly stopping sandbox \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\"" Sep 12 23:56:47.811946 containerd[1447]: 2025-09-12 23:56:47.781 [WARNING][5781] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w5b24-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2a2755c3-03c7-4f05-b24d-8c93e47436ce", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60b4079d1aa9275eeaecf98f7c461d8ba4b026d5f10dda8d9666f9e4086c4482", Pod:"csi-node-driver-w5b24", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4751d6d8ff1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:47.811946 containerd[1447]: 2025-09-12 23:56:47.781 [INFO][5781] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:47.811946 containerd[1447]: 2025-09-12 23:56:47.781 [INFO][5781] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" iface="eth0" netns="" Sep 12 23:56:47.811946 containerd[1447]: 2025-09-12 23:56:47.781 [INFO][5781] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:47.811946 containerd[1447]: 2025-09-12 23:56:47.781 [INFO][5781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:47.811946 containerd[1447]: 2025-09-12 23:56:47.799 [INFO][5789] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" HandleID="k8s-pod-network.6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Workload="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:47.811946 containerd[1447]: 2025-09-12 23:56:47.799 [INFO][5789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:47.811946 containerd[1447]: 2025-09-12 23:56:47.799 [INFO][5789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:47.811946 containerd[1447]: 2025-09-12 23:56:47.807 [WARNING][5789] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" HandleID="k8s-pod-network.6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Workload="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:47.811946 containerd[1447]: 2025-09-12 23:56:47.807 [INFO][5789] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" HandleID="k8s-pod-network.6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Workload="localhost-k8s-csi--node--driver--w5b24-eth0" Sep 12 23:56:47.811946 containerd[1447]: 2025-09-12 23:56:47.808 [INFO][5789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:47.811946 containerd[1447]: 2025-09-12 23:56:47.810 [INFO][5781] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce" Sep 12 23:56:47.812552 containerd[1447]: time="2025-09-12T23:56:47.811987637Z" level=info msg="TearDown network for sandbox \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\" successfully" Sep 12 23:56:47.814895 containerd[1447]: time="2025-09-12T23:56:47.814866616Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:47.814940 containerd[1447]: time="2025-09-12T23:56:47.814927696Z" level=info msg="RemovePodSandbox \"6af0b214d8f2a51b3566234f261ef5a658b006805681eb42f794166da3fb38ce\" returns successfully" Sep 12 23:56:47.815383 containerd[1447]: time="2025-09-12T23:56:47.815358653Z" level=info msg="StopPodSandbox for \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\"" Sep 12 23:56:47.878093 systemd[1]: Started sshd@12-10.0.0.36:22-10.0.0.1:34780.service - OpenSSH per-connection server daemon (10.0.0.1:34780). Sep 12 23:56:47.886860 containerd[1447]: 2025-09-12 23:56:47.847 [WARNING][5806] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c36fd1a-8b9e-4673-89de-740f2dd47379", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6", Pod:"coredns-668d6bf9bc-4f7pq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a117d80538", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:47.886860 containerd[1447]: 2025-09-12 23:56:47.847 [INFO][5806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Sep 12 23:56:47.886860 containerd[1447]: 2025-09-12 23:56:47.847 [INFO][5806] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" iface="eth0" netns="" Sep 12 23:56:47.886860 containerd[1447]: 2025-09-12 23:56:47.847 [INFO][5806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Sep 12 23:56:47.886860 containerd[1447]: 2025-09-12 23:56:47.847 [INFO][5806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Sep 12 23:56:47.886860 containerd[1447]: 2025-09-12 23:56:47.867 [INFO][5814] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" HandleID="k8s-pod-network.289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Workload="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0" Sep 12 23:56:47.886860 containerd[1447]: 2025-09-12 23:56:47.867 [INFO][5814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:47.886860 containerd[1447]: 2025-09-12 23:56:47.867 [INFO][5814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:47.886860 containerd[1447]: 2025-09-12 23:56:47.876 [WARNING][5814] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" HandleID="k8s-pod-network.289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Workload="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0"
Sep 12 23:56:47.886860 containerd[1447]: 2025-09-12 23:56:47.876 [INFO][5814] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" HandleID="k8s-pod-network.289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Workload="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0"
Sep 12 23:56:47.886860 containerd[1447]: 2025-09-12 23:56:47.879 [INFO][5814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 23:56:47.886860 containerd[1447]: 2025-09-12 23:56:47.882 [INFO][5806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974"
Sep 12 23:56:47.887571 containerd[1447]: time="2025-09-12T23:56:47.886901572Z" level=info msg="TearDown network for sandbox \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\" successfully"
Sep 12 23:56:47.887571 containerd[1447]: time="2025-09-12T23:56:47.886925812Z" level=info msg="StopPodSandbox for \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\" returns successfully"
Sep 12 23:56:47.887571 containerd[1447]: time="2025-09-12T23:56:47.887344529Z" level=info msg="RemovePodSandbox for \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\""
Sep 12 23:56:47.887571 containerd[1447]: time="2025-09-12T23:56:47.887376489Z" level=info msg="Forcibly stopping sandbox \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\""
Sep 12 23:56:47.937860 sshd[5822]: Accepted publickey for core from 10.0.0.1 port 34780 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 12 23:56:47.939233 sshd[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:56:47.948419 systemd-logind[1433]: New session 13 of user core.
Sep 12 23:56:47.950952 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 23:56:47.966815 containerd[1447]: 2025-09-12 23:56:47.930 [WARNING][5834] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c36fd1a-8b9e-4673-89de-740f2dd47379", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74d0b54f5eec1a13017045469b5098c81f767d9436092b329ee64dfb3c821ba6", Pod:"coredns-668d6bf9bc-4f7pq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a117d80538", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 23:56:47.966815 containerd[1447]: 2025-09-12 23:56:47.930 [INFO][5834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974"
Sep 12 23:56:47.966815 containerd[1447]: 2025-09-12 23:56:47.930 [INFO][5834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" iface="eth0" netns=""
Sep 12 23:56:47.966815 containerd[1447]: 2025-09-12 23:56:47.930 [INFO][5834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974"
Sep 12 23:56:47.966815 containerd[1447]: 2025-09-12 23:56:47.930 [INFO][5834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974"
Sep 12 23:56:47.966815 containerd[1447]: 2025-09-12 23:56:47.952 [INFO][5843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" HandleID="k8s-pod-network.289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Workload="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0"
Sep 12 23:56:47.966815 containerd[1447]: 2025-09-12 23:56:47.952 [INFO][5843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 23:56:47.966815 containerd[1447]: 2025-09-12 23:56:47.952 [INFO][5843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 23:56:47.966815 containerd[1447]: 2025-09-12 23:56:47.961 [WARNING][5843] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" HandleID="k8s-pod-network.289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Workload="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0"
Sep 12 23:56:47.966815 containerd[1447]: 2025-09-12 23:56:47.961 [INFO][5843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" HandleID="k8s-pod-network.289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974" Workload="localhost-k8s-coredns--668d6bf9bc--4f7pq-eth0"
Sep 12 23:56:47.966815 containerd[1447]: 2025-09-12 23:56:47.963 [INFO][5843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 23:56:47.966815 containerd[1447]: 2025-09-12 23:56:47.965 [INFO][5834] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974"
Sep 12 23:56:47.967202 containerd[1447]: time="2025-09-12T23:56:47.966861750Z" level=info msg="TearDown network for sandbox \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\" successfully"
Sep 12 23:56:47.972607 containerd[1447]: time="2025-09-12T23:56:47.972347910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 23:56:47.972607 containerd[1447]: time="2025-09-12T23:56:47.972411510Z" level=info msg="RemovePodSandbox \"289cc09b22e5eb74e4a502e9d9304cec9e931d7030008416548e8f150c2dd974\" returns successfully"
Sep 12 23:56:47.972894 containerd[1447]: time="2025-09-12T23:56:47.972870507Z" level=info msg="StopPodSandbox for \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\""
Sep 12 23:56:48.054871 containerd[1447]: 2025-09-12 23:56:48.013 [WARNING][5862] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--fbbp9-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a133ea3e-84b0-48ca-a5f9-b58285cab3ba", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6", Pod:"goldmane-54d579b49d-fbbp9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3c073ee94a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 23:56:48.054871 containerd[1447]: 2025-09-12 23:56:48.013 [INFO][5862] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816"
Sep 12 23:56:48.054871 containerd[1447]: 2025-09-12 23:56:48.013 [INFO][5862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" iface="eth0" netns=""
Sep 12 23:56:48.054871 containerd[1447]: 2025-09-12 23:56:48.013 [INFO][5862] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816"
Sep 12 23:56:48.054871 containerd[1447]: 2025-09-12 23:56:48.013 [INFO][5862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816"
Sep 12 23:56:48.054871 containerd[1447]: 2025-09-12 23:56:48.033 [INFO][5875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" HandleID="k8s-pod-network.6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Workload="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0"
Sep 12 23:56:48.054871 containerd[1447]: 2025-09-12 23:56:48.033 [INFO][5875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 23:56:48.054871 containerd[1447]: 2025-09-12 23:56:48.033 [INFO][5875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 23:56:48.054871 containerd[1447]: 2025-09-12 23:56:48.041 [WARNING][5875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" HandleID="k8s-pod-network.6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Workload="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0"
Sep 12 23:56:48.054871 containerd[1447]: 2025-09-12 23:56:48.042 [INFO][5875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" HandleID="k8s-pod-network.6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Workload="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0"
Sep 12 23:56:48.054871 containerd[1447]: 2025-09-12 23:56:48.043 [INFO][5875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 23:56:48.054871 containerd[1447]: 2025-09-12 23:56:48.050 [INFO][5862] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816"
Sep 12 23:56:48.055285 containerd[1447]: time="2025-09-12T23:56:48.054914314Z" level=info msg="TearDown network for sandbox \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\" successfully"
Sep 12 23:56:48.055285 containerd[1447]: time="2025-09-12T23:56:48.054939393Z" level=info msg="StopPodSandbox for \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\" returns successfully"
Sep 12 23:56:48.056939 containerd[1447]: time="2025-09-12T23:56:48.056911459Z" level=info msg="RemovePodSandbox for \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\""
Sep 12 23:56:48.056985 containerd[1447]: time="2025-09-12T23:56:48.056950379Z" level=info msg="Forcibly stopping sandbox \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\""
Sep 12 23:56:48.156183 containerd[1447]: 2025-09-12 23:56:48.099 [WARNING][5896] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--fbbp9-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a133ea3e-84b0-48ca-a5f9-b58285cab3ba", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf659ac3cec9ff4f050a684e2310467fff7b19c8c8e1420f5fd211778c83c1b6", Pod:"goldmane-54d579b49d-fbbp9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3c073ee94a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 23:56:48.156183 containerd[1447]: 2025-09-12 23:56:48.101 [INFO][5896] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816"
Sep 12 23:56:48.156183 containerd[1447]: 2025-09-12 23:56:48.101 [INFO][5896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" iface="eth0" netns=""
Sep 12 23:56:48.156183 containerd[1447]: 2025-09-12 23:56:48.101 [INFO][5896] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816"
Sep 12 23:56:48.156183 containerd[1447]: 2025-09-12 23:56:48.101 [INFO][5896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816"
Sep 12 23:56:48.156183 containerd[1447]: 2025-09-12 23:56:48.133 [INFO][5905] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" HandleID="k8s-pod-network.6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Workload="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0"
Sep 12 23:56:48.156183 containerd[1447]: 2025-09-12 23:56:48.133 [INFO][5905] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 23:56:48.156183 containerd[1447]: 2025-09-12 23:56:48.134 [INFO][5905] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 23:56:48.156183 containerd[1447]: 2025-09-12 23:56:48.147 [WARNING][5905] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" HandleID="k8s-pod-network.6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Workload="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0"
Sep 12 23:56:48.156183 containerd[1447]: 2025-09-12 23:56:48.147 [INFO][5905] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" HandleID="k8s-pod-network.6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816" Workload="localhost-k8s-goldmane--54d579b49d--fbbp9-eth0"
Sep 12 23:56:48.156183 containerd[1447]: 2025-09-12 23:56:48.149 [INFO][5905] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 23:56:48.156183 containerd[1447]: 2025-09-12 23:56:48.153 [INFO][5896] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816"
Sep 12 23:56:48.156706 containerd[1447]: time="2025-09-12T23:56:48.156227024Z" level=info msg="TearDown network for sandbox \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\" successfully"
Sep 12 23:56:48.164208 containerd[1447]: time="2025-09-12T23:56:48.164126327Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 23:56:48.164333 containerd[1447]: time="2025-09-12T23:56:48.164259286Z" level=info msg="RemovePodSandbox \"6e732fb2e9070a1bb273f4be1b9d2854ce2c07c193244333dc875d6bf1df3816\" returns successfully"
Sep 12 23:56:48.350979 sshd[5822]: pam_unix(sshd:session): session closed for user core
Sep 12 23:56:48.363981 systemd[1]: sshd@12-10.0.0.36:22-10.0.0.1:34780.service: Deactivated successfully.
Sep 12 23:56:48.366453 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 23:56:48.368319 systemd-logind[1433]: Session 13 logged out. Waiting for processes to exit.
Sep 12 23:56:48.380084 systemd[1]: Started sshd@13-10.0.0.36:22-10.0.0.1:34784.service - OpenSSH per-connection server daemon (10.0.0.1:34784).
Sep 12 23:56:48.382540 systemd-logind[1433]: Removed session 13.
Sep 12 23:56:48.420019 sshd[5916]: Accepted publickey for core from 10.0.0.1 port 34784 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 12 23:56:48.421840 sshd[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:56:48.426216 systemd-logind[1433]: New session 14 of user core.
Sep 12 23:56:48.431927 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 23:56:48.628232 sshd[5916]: pam_unix(sshd:session): session closed for user core
Sep 12 23:56:48.638949 systemd[1]: sshd@13-10.0.0.36:22-10.0.0.1:34784.service: Deactivated successfully.
Sep 12 23:56:48.640866 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 23:56:48.642229 systemd-logind[1433]: Session 14 logged out. Waiting for processes to exit.
Sep 12 23:56:48.649349 systemd[1]: Started sshd@14-10.0.0.36:22-10.0.0.1:34792.service - OpenSSH per-connection server daemon (10.0.0.1:34792).
Sep 12 23:56:48.650391 systemd-logind[1433]: Removed session 14.
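
The SHA256:... string in each "Accepted publickey" entry is the unpadded base64 of the SHA-256 digest of the key's wire-format public blob. golang.org/x/crypto/ssh produces exactly this form via ssh.FingerprintSHA256; a self-contained sketch using a freshly generated throwaway key (not the key from this log):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate a throwaway key so the example runs as-is; in practice you
	// would parse the user's public key with ssh.ParseAuthorizedKey.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	// Prints "SHA256:...", the same format sshd logs in
	// "Accepted publickey for ... ssh2: RSA SHA256:...".
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}
```
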
Sep 12 23:56:48.685341 sshd[5929]: Accepted publickey for core from 10.0.0.1 port 34792 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 12 23:56:48.686493 sshd[5929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:56:48.689885 systemd-logind[1433]: New session 15 of user core.
Sep 12 23:56:48.701894 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 23:56:49.312101 sshd[5929]: pam_unix(sshd:session): session closed for user core
Sep 12 23:56:49.321355 systemd[1]: sshd@14-10.0.0.36:22-10.0.0.1:34792.service: Deactivated successfully.
Sep 12 23:56:49.326253 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 23:56:49.330011 systemd-logind[1433]: Session 15 logged out. Waiting for processes to exit.
Sep 12 23:56:49.370148 systemd[1]: Started sshd@15-10.0.0.36:22-10.0.0.1:34808.service - OpenSSH per-connection server daemon (10.0.0.1:34808).
Sep 12 23:56:49.376776 systemd-logind[1433]: Removed session 15.
Sep 12 23:56:49.412041 sshd[5949]: Accepted publickey for core from 10.0.0.1 port 34808 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 12 23:56:49.413451 sshd[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:56:49.420009 systemd-logind[1433]: New session 16 of user core.
Sep 12 23:56:49.426939 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 23:56:49.961021 sshd[5949]: pam_unix(sshd:session): session closed for user core
Sep 12 23:56:49.974813 systemd[1]: sshd@15-10.0.0.36:22-10.0.0.1:34808.service: Deactivated successfully.
Sep 12 23:56:49.976403 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 23:56:49.979263 systemd-logind[1433]: Session 16 logged out. Waiting for processes to exit.
Sep 12 23:56:49.989131 systemd[1]: Started sshd@16-10.0.0.36:22-10.0.0.1:56700.service - OpenSSH per-connection server daemon (10.0.0.1:56700).
Sep 12 23:56:49.994852 systemd-logind[1433]: Removed session 16.
Sep 12 23:56:50.028858 sshd[5967]: Accepted publickey for core from 10.0.0.1 port 56700 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 12 23:56:50.030255 sshd[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:56:50.034327 systemd-logind[1433]: New session 17 of user core.
Sep 12 23:56:50.042922 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 23:56:50.181169 sshd[5967]: pam_unix(sshd:session): session closed for user core
Sep 12 23:56:50.184546 systemd[1]: sshd@16-10.0.0.36:22-10.0.0.1:56700.service: Deactivated successfully.
Sep 12 23:56:50.186860 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 23:56:50.189397 systemd-logind[1433]: Session 17 logged out. Waiting for processes to exit.
Sep 12 23:56:50.190346 systemd-logind[1433]: Removed session 17.
Sep 12 23:56:55.195074 systemd[1]: Started sshd@17-10.0.0.36:22-10.0.0.1:56714.service - OpenSSH per-connection server daemon (10.0.0.1:56714).
Sep 12 23:56:55.243195 sshd[6032]: Accepted publickey for core from 10.0.0.1 port 56714 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 12 23:56:55.244940 sshd[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:56:55.252235 systemd-logind[1433]: New session 18 of user core.
Sep 12 23:56:55.261039 systemd[1]: Started session-18.scope - Session 18 of User core.
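
Each connection here gets its own "OpenSSH per-connection server daemon" unit, and the instance name encodes a connection counter plus the local and peer endpoints, e.g. sshd@15-10.0.0.36:22-10.0.0.1:34808.service. A small sketch for pulling those fields out of such a name when scanning logs like this one (the format assumed is only what appears in this capture, IPv4 endpoints included):

```go
package main

import (
	"fmt"
	"regexp"
)

// unitRE matches per-connection unit names of the shape seen above:
// sshd@<counter>-<local addr>:<port>-<peer addr>:<port>.service
var unitRE = regexp.MustCompile(`^sshd@(\d+)-([\d.]+:\d+)-([\d.]+:\d+)\.service$`)

func main() {
	unit := "sshd@15-10.0.0.36:22-10.0.0.1:34808.service"
	m := unitRE.FindStringSubmatch(unit)
	if m == nil {
		fmt.Println("not a per-connection sshd unit")
		return
	}
	fmt.Printf("connection #%s: local %s, peer %s\n", m[1], m[2], m[3])
}
```
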
Sep 12 23:56:55.404935 sshd[6032]: pam_unix(sshd:session): session closed for user core
Sep 12 23:56:55.407526 systemd[1]: sshd@17-10.0.0.36:22-10.0.0.1:56714.service: Deactivated successfully.
Sep 12 23:56:55.409249 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 23:56:55.410407 systemd-logind[1433]: Session 18 logged out. Waiting for processes to exit.
Sep 12 23:56:55.411188 systemd-logind[1433]: Removed session 18.
Sep 12 23:57:00.416409 systemd[1]: Started sshd@18-10.0.0.36:22-10.0.0.1:52040.service - OpenSSH per-connection server daemon (10.0.0.1:52040).
Sep 12 23:57:00.456230 sshd[6073]: Accepted publickey for core from 10.0.0.1 port 52040 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 12 23:57:00.457531 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:00.461085 systemd-logind[1433]: New session 19 of user core.
Sep 12 23:57:00.470903 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 23:57:00.588244 sshd[6073]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:00.592180 systemd[1]: sshd@18-10.0.0.36:22-10.0.0.1:52040.service: Deactivated successfully.
Sep 12 23:57:00.595284 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 23:57:00.596273 systemd-logind[1433]: Session 19 logged out. Waiting for processes to exit.
Sep 12 23:57:00.597146 systemd-logind[1433]: Removed session 19.
Sep 12 23:57:03.821298 kubelet[2479]: E0912 23:57:03.821244 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:57:05.602561 systemd[1]: Started sshd@19-10.0.0.36:22-10.0.0.1:52046.service - OpenSSH per-connection server daemon (10.0.0.1:52046).
Sep 12 23:57:05.651123 sshd[6088]: Accepted publickey for core from 10.0.0.1 port 52046 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 12 23:57:05.654036 sshd[6088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:05.659390 systemd-logind[1433]: New session 20 of user core.
Sep 12 23:57:05.664992 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 23:57:05.830026 sshd[6088]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:05.834137 systemd[1]: sshd@19-10.0.0.36:22-10.0.0.1:52046.service: Deactivated successfully.
Sep 12 23:57:05.837549 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 23:57:05.838177 systemd-logind[1433]: Session 20 logged out. Waiting for processes to exit.
Sep 12 23:57:05.839494 systemd-logind[1433]: Removed session 20.
Sep 12 23:57:10.846141 systemd[1]: Started sshd@20-10.0.0.36:22-10.0.0.1:39622.service - OpenSSH per-connection server daemon (10.0.0.1:39622).
Sep 12 23:57:10.887693 sshd[6130]: Accepted publickey for core from 10.0.0.1 port 39622 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 12 23:57:10.889351 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:10.893381 systemd-logind[1433]: New session 21 of user core.
Sep 12 23:57:10.903981 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 23:57:11.044985 sshd[6130]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:11.048246 systemd[1]: sshd@20-10.0.0.36:22-10.0.0.1:39622.service: Deactivated successfully.
Sep 12 23:57:11.050073 systemd[1]: session-21.scope: Deactivated successfully.
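
The kubelet "Nameserver limits exceeded" entry above reflects the glibc resolver's three-nameserver cap (MAXNS): the node's resolv.conf lists more servers than that, so kubelet applies the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and logs that the rest were omitted rather than failing. A sketch of applying the same cap when reading a resolv.conf; the constant mirrors MAXNS, and the parsing here is deliberately simplified, not kubelet's actual code:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the glibc MAXNS limit that motivates kubelet's
// warning when building a pod's resolv.conf from the node's.
const maxNameservers = 3

// nameservers splits the "nameserver" entries of a resolv.conf into the
// servers that fit under the cap and the ones that would be omitted.
func nameservers(resolvConf string) (kept, dropped []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return kept, dropped
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, dropped := nameservers(conf)
	if len(dropped) > 0 {
		// The situation behind the log entry: extras are omitted, not fatal.
		fmt.Printf("applied: %s (omitted: %s)\n",
			strings.Join(kept, " "), strings.Join(dropped, " "))
	}
}
```
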
Sep 12 23:57:11.050675 systemd-logind[1433]: Session 21 logged out. Waiting for processes to exit.
Sep 12 23:57:11.051410 systemd-logind[1433]: Removed session 21.
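
Sessions 13 through 21 in this capture each pair a systemd-logind "New session" with a later "Removed session". When auditing a longer capture, pairing those entries makes sessions that never closed stand out; a throwaway sketch over an inlined journal excerpt (in a real run you would feed it the full journal text):

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

var (
	newRE     = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	removedRE = regexp.MustCompile(`Removed session (\d+)\.`)
)

// main pairs "New session"/"Removed session" entries so any session left
// open at the end of the window is reported.
func main() {
	journal := `Sep 12 23:56:48.426216 systemd-logind[1433]: New session 14 of user core.
Sep 12 23:56:48.650391 systemd-logind[1433]: Removed session 14.
Sep 12 23:56:48.689885 systemd-logind[1433]: New session 15 of user core.`

	open := map[string]string{} // session ID -> user
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		if m := newRE.FindStringSubmatch(sc.Text()); m != nil {
			open[m[1]] = m[2]
		} else if m := removedRE.FindStringSubmatch(sc.Text()); m != nil {
			delete(open, m[1])
		}
	}
	for id, user := range open {
		fmt.Printf("session %s of user %s never closed in this window\n", id, user)
	}
}
```
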