Sep 5 00:10:27.836115 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 5 00:10:27.836136 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Sep 4 22:50:35 -00 2025
Sep 5 00:10:27.836145 kernel: KASLR enabled
Sep 5 00:10:27.836151 kernel: efi: EFI v2.7 by EDK II
Sep 5 00:10:27.836156 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 5 00:10:27.836162 kernel: random: crng init done
Sep 5 00:10:27.836169 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:10:27.836175 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 5 00:10:27.836181 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 5 00:10:27.836188 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:10:27.836194 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:10:27.836200 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:10:27.836206 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:10:27.836212 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:10:27.836219 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:10:27.836227 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:10:27.836234 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:10:27.836240 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:10:27.836246 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 5 00:10:27.836252 kernel: NUMA: Failed to initialise from firmware
Sep 5 00:10:27.836258 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 00:10:27.836265 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 5 00:10:27.836271 kernel: Zone ranges:
Sep 5 00:10:27.836277 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 00:10:27.836354 kernel: DMA32 empty
Sep 5 00:10:27.836363 kernel: Normal empty
Sep 5 00:10:27.836369 kernel: Movable zone start for each node
Sep 5 00:10:27.836375 kernel: Early memory node ranges
Sep 5 00:10:27.836381 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 5 00:10:27.836388 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 5 00:10:27.836394 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 5 00:10:27.836400 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 5 00:10:27.836406 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 5 00:10:27.836413 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 5 00:10:27.836419 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 5 00:10:27.836425 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 00:10:27.836431 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 5 00:10:27.836439 kernel: psci: probing for conduit method from ACPI.
Sep 5 00:10:27.836445 kernel: psci: PSCIv1.1 detected in firmware.
Sep 5 00:10:27.836452 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 00:10:27.836461 kernel: psci: Trusted OS migration not required
Sep 5 00:10:27.836467 kernel: psci: SMC Calling Convention v1.1
Sep 5 00:10:27.836474 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 5 00:10:27.836482 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 5 00:10:27.836489 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 5 00:10:27.836496 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 5 00:10:27.836503 kernel: Detected PIPT I-cache on CPU0
Sep 5 00:10:27.836509 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 00:10:27.836516 kernel: CPU features: detected: Hardware dirty bit management
Sep 5 00:10:27.836523 kernel: CPU features: detected: Spectre-v4
Sep 5 00:10:27.836529 kernel: CPU features: detected: Spectre-BHB
Sep 5 00:10:27.836536 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 5 00:10:27.836543 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 5 00:10:27.836550 kernel: CPU features: detected: ARM erratum 1418040
Sep 5 00:10:27.836557 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 5 00:10:27.836564 kernel: alternatives: applying boot alternatives
Sep 5 00:10:27.836572 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=74b18a518d158648275add16e3ab4f37e237ff7b3b2938818abfe7ffe97d585a
Sep 5 00:10:27.836579 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:10:27.836585 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:10:27.836592 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:10:27.836599 kernel: Fallback order for Node 0: 0
Sep 5 00:10:27.836606 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 5 00:10:27.836612 kernel: Policy zone: DMA
Sep 5 00:10:27.836619 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:10:27.836626 kernel: software IO TLB: area num 4.
Sep 5 00:10:27.836633 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 5 00:10:27.836640 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Sep 5 00:10:27.836647 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:10:27.836653 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:10:27.836661 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:10:27.836668 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:10:27.836674 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:10:27.836681 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:10:27.836688 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 00:10:27.836694 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:10:27.836702 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 5 00:10:27.836709 kernel: GICv3: 256 SPIs implemented
Sep 5 00:10:27.836716 kernel: GICv3: 0 Extended SPIs implemented
Sep 5 00:10:27.836722 kernel: Root IRQ handler: gic_handle_irq
Sep 5 00:10:27.836729 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 5 00:10:27.836735 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 5 00:10:27.836742 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 5 00:10:27.836749 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 5 00:10:27.836756 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 5 00:10:27.836763 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 5 00:10:27.836769 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 5 00:10:27.836776 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:10:27.836785 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:10:27.836792 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 5 00:10:27.836799 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 5 00:10:27.836806 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 5 00:10:27.836813 kernel: arm-pv: using stolen time PV
Sep 5 00:10:27.836820 kernel: Console: colour dummy device 80x25
Sep 5 00:10:27.836827 kernel: ACPI: Core revision 20230628
Sep 5 00:10:27.836835 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 5 00:10:27.836842 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:10:27.836849 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 00:10:27.836857 kernel: landlock: Up and running.
Sep 5 00:10:27.836864 kernel: SELinux: Initializing.
Sep 5 00:10:27.836871 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:10:27.836878 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:10:27.836885 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:10:27.836892 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:10:27.836899 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:10:27.836906 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:10:27.836913 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 5 00:10:27.836921 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 5 00:10:27.836937 kernel: Remapping and enabling EFI services.
Sep 5 00:10:27.836944 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:10:27.836951 kernel: Detected PIPT I-cache on CPU1
Sep 5 00:10:27.836958 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 5 00:10:27.836965 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 5 00:10:27.836972 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:10:27.836978 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 5 00:10:27.836985 kernel: Detected PIPT I-cache on CPU2
Sep 5 00:10:27.836992 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 5 00:10:27.837001 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 5 00:10:27.837008 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:10:27.837020 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 5 00:10:27.837028 kernel: Detected PIPT I-cache on CPU3
Sep 5 00:10:27.837036 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 5 00:10:27.837043 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 5 00:10:27.837050 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:10:27.837057 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 5 00:10:27.837064 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:10:27.837073 kernel: SMP: Total of 4 processors activated.
Sep 5 00:10:27.837080 kernel: CPU features: detected: 32-bit EL0 Support
Sep 5 00:10:27.837088 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 5 00:10:27.837095 kernel: CPU features: detected: Common not Private translations
Sep 5 00:10:27.837102 kernel: CPU features: detected: CRC32 instructions
Sep 5 00:10:27.837109 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 5 00:10:27.837116 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 5 00:10:27.837124 kernel: CPU features: detected: LSE atomic instructions
Sep 5 00:10:27.837132 kernel: CPU features: detected: Privileged Access Never
Sep 5 00:10:27.837139 kernel: CPU features: detected: RAS Extension Support
Sep 5 00:10:27.837146 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 5 00:10:27.837154 kernel: CPU: All CPU(s) started at EL1
Sep 5 00:10:27.837161 kernel: alternatives: applying system-wide alternatives
Sep 5 00:10:27.837168 kernel: devtmpfs: initialized
Sep 5 00:10:27.837175 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:10:27.837183 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:10:27.837190 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:10:27.837198 kernel: SMBIOS 3.0.0 present.
Sep 5 00:10:27.837205 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 5 00:10:27.837212 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:10:27.837220 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 5 00:10:27.837227 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 5 00:10:27.837235 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 5 00:10:27.837242 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:10:27.837249 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Sep 5 00:10:27.837256 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:10:27.837264 kernel: cpuidle: using governor menu
Sep 5 00:10:27.837272 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 5 00:10:27.837284 kernel: ASID allocator initialised with 32768 entries
Sep 5 00:10:27.837293 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:10:27.837300 kernel: Serial: AMBA PL011 UART driver
Sep 5 00:10:27.837307 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 5 00:10:27.837314 kernel: Modules: 0 pages in range for non-PLT usage
Sep 5 00:10:27.837322 kernel: Modules: 509008 pages in range for PLT usage
Sep 5 00:10:27.837329 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:10:27.837338 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:10:27.837346 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 5 00:10:27.837353 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 5 00:10:27.837360 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:10:27.837368 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:10:27.837375 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 5 00:10:27.837383 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 5 00:10:27.837390 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:10:27.837397 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:10:27.837405 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:10:27.837412 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:10:27.837419 kernel: ACPI: Interpreter enabled
Sep 5 00:10:27.837426 kernel: ACPI: Using GIC for interrupt routing
Sep 5 00:10:27.837433 kernel: ACPI: MCFG table detected, 1 entries
Sep 5 00:10:27.837441 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 5 00:10:27.837448 kernel: printk: console [ttyAMA0] enabled
Sep 5 00:10:27.837455 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:10:27.837589 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:10:27.837667 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 5 00:10:27.837732 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 5 00:10:27.837795 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 5 00:10:27.837856 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 5 00:10:27.837866 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 5 00:10:27.837874 kernel: PCI host bridge to bus 0000:00
Sep 5 00:10:27.837976 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 5 00:10:27.838043 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 5 00:10:27.838101 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 5 00:10:27.838156 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:10:27.838233 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 5 00:10:27.838351 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 5 00:10:27.838422 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 5 00:10:27.838489 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 5 00:10:27.838553 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 00:10:27.838615 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 00:10:27.838679 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 5 00:10:27.838744 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 5 00:10:27.838807 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 5 00:10:27.838866 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 5 00:10:27.838936 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 5 00:10:27.838947 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 5 00:10:27.838954 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 5 00:10:27.838962 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 5 00:10:27.838969 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 5 00:10:27.838976 kernel: iommu: Default domain type: Translated
Sep 5 00:10:27.838984 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 5 00:10:27.838991 kernel: efivars: Registered efivars operations
Sep 5 00:10:27.838998 kernel: vgaarb: loaded
Sep 5 00:10:27.839008 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 5 00:10:27.839015 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:10:27.839023 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:10:27.839030 kernel: pnp: PnP ACPI init
Sep 5 00:10:27.839106 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 5 00:10:27.839117 kernel: pnp: PnP ACPI: found 1 devices
Sep 5 00:10:27.839124 kernel: NET: Registered PF_INET protocol family
Sep 5 00:10:27.839131 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:10:27.839141 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:10:27.839148 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:10:27.839156 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:10:27.839163 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:10:27.839170 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:10:27.839177 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:10:27.839185 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:10:27.839192 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:10:27.839199 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:10:27.839208 kernel: kvm [1]: HYP mode not available
Sep 5 00:10:27.839215 kernel: Initialise system trusted keyrings
Sep 5 00:10:27.839222 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:10:27.839230 kernel: Key type asymmetric registered
Sep 5 00:10:27.839237 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:10:27.839244 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 00:10:27.839251 kernel: io scheduler mq-deadline registered
Sep 5 00:10:27.839258 kernel: io scheduler kyber registered
Sep 5 00:10:27.839266 kernel: io scheduler bfq registered
Sep 5 00:10:27.839274 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 5 00:10:27.839291 kernel: ACPI: button: Power Button [PWRB]
Sep 5 00:10:27.839299 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 5 00:10:27.839389 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 5 00:10:27.839401 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:10:27.839408 kernel: thunder_xcv, ver 1.0
Sep 5 00:10:27.839415 kernel: thunder_bgx, ver 1.0
Sep 5 00:10:27.839422 kernel: nicpf, ver 1.0
Sep 5 00:10:27.839430 kernel: nicvf, ver 1.0
Sep 5 00:10:27.839510 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 5 00:10:27.839575 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T00:10:27 UTC (1757031027)
Sep 5 00:10:27.839585 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 5 00:10:27.839593 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 5 00:10:27.839601 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 5 00:10:27.839608 kernel: watchdog: Hard watchdog permanently disabled
Sep 5 00:10:27.839615 kernel: NET: Registered PF_INET6 protocol family
Sep 5 00:10:27.839622 kernel: Segment Routing with IPv6
Sep 5 00:10:27.839632 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 00:10:27.839639 kernel: NET: Registered PF_PACKET protocol family
Sep 5 00:10:27.839646 kernel: Key type dns_resolver registered
Sep 5 00:10:27.839654 kernel: registered taskstats version 1
Sep 5 00:10:27.839661 kernel: Loading compiled-in X.509 certificates
Sep 5 00:10:27.839668 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: ff0f0c0ea2d5fe320cfcc368cee8225e09a20239'
Sep 5 00:10:27.839675 kernel: Key type .fscrypt registered
Sep 5 00:10:27.839682 kernel: Key type fscrypt-provisioning registered
Sep 5 00:10:27.839693 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 00:10:27.839703 kernel: ima: Allocated hash algorithm: sha1
Sep 5 00:10:27.839710 kernel: ima: No architecture policies found
Sep 5 00:10:27.839718 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 5 00:10:27.839725 kernel: clk: Disabling unused clocks
Sep 5 00:10:27.839734 kernel: Freeing unused kernel memory: 39424K
Sep 5 00:10:27.839741 kernel: Run /init as init process
Sep 5 00:10:27.839748 kernel: with arguments:
Sep 5 00:10:27.839755 kernel: /init
Sep 5 00:10:27.839762 kernel: with environment:
Sep 5 00:10:27.839770 kernel: HOME=/
Sep 5 00:10:27.839777 kernel: TERM=linux
Sep 5 00:10:27.839784 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 00:10:27.839793 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 00:10:27.839803 systemd[1]: Detected virtualization kvm.
Sep 5 00:10:27.839811 systemd[1]: Detected architecture arm64.
Sep 5 00:10:27.839818 systemd[1]: Running in initrd.
Sep 5 00:10:27.839826 systemd[1]: No hostname configured, using default hostname.
Sep 5 00:10:27.839835 systemd[1]: Hostname set to .
Sep 5 00:10:27.839843 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:10:27.839851 systemd[1]: Queued start job for default target initrd.target.
Sep 5 00:10:27.839859 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:10:27.839867 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:10:27.839876 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 00:10:27.839884 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:10:27.839893 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:10:27.839901 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 00:10:27.839911 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:10:27.839919 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:10:27.839935 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:10:27.839944 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:10:27.839952 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:10:27.839961 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:10:27.839969 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:10:27.839977 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:10:27.839988 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:10:27.839996 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:10:27.840004 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:10:27.840011 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 00:10:27.840020 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:10:27.840028 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:10:27.840037 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:10:27.840045 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:10:27.840053 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 00:10:27.840061 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:10:27.840069 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 00:10:27.840076 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 00:10:27.840084 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:10:27.840092 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:10:27.840102 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:10:27.840109 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 00:10:27.840117 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:10:27.840125 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 00:10:27.840134 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:10:27.840160 systemd-journald[237]: Collecting audit messages is disabled.
Sep 5 00:10:27.840179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:10:27.840188 systemd-journald[237]: Journal started
Sep 5 00:10:27.840208 systemd-journald[237]: Runtime Journal (/run/log/journal/08b616c62c6d4d57a7b510d7279887f1) is 5.9M, max 47.3M, 41.4M free.
Sep 5 00:10:27.831748 systemd-modules-load[238]: Inserted module 'overlay'
Sep 5 00:10:27.847103 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 00:10:27.847123 kernel: Bridge firewalling registered
Sep 5 00:10:27.845840 systemd-modules-load[238]: Inserted module 'br_netfilter'
Sep 5 00:10:27.849706 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:10:27.851308 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:10:27.851692 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:10:27.853334 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:10:27.857457 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:10:27.859564 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:10:27.862983 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:10:27.864162 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:10:27.866546 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 00:10:27.869262 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:10:27.871630 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:10:27.876356 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:10:27.881027 dracut-cmdline[271]: dracut-dracut-053
Sep 5 00:10:27.884903 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=74b18a518d158648275add16e3ab4f37e237ff7b3b2938818abfe7ffe97d585a
Sep 5 00:10:27.883987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:10:27.908456 systemd-resolved[283]: Positive Trust Anchors:
Sep 5 00:10:27.908474 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:10:27.908508 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:10:27.913305 systemd-resolved[283]: Defaulting to hostname 'linux'.
Sep 5 00:10:27.915180 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:10:27.916161 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:10:27.945315 kernel: SCSI subsystem initialized
Sep 5 00:10:27.950302 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 00:10:27.958310 kernel: iscsi: registered transport (tcp)
Sep 5 00:10:27.970301 kernel: iscsi: registered transport (qla4xxx)
Sep 5 00:10:27.970319 kernel: QLogic iSCSI HBA Driver
Sep 5 00:10:28.012371 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:10:28.023424 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 00:10:28.038915 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 00:10:28.039011 kernel: device-mapper: uevent: version 1.0.3
Sep 5 00:10:28.039037 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 00:10:28.084328 kernel: raid6: neonx8 gen() 15758 MB/s
Sep 5 00:10:28.101319 kernel: raid6: neonx4 gen() 15676 MB/s
Sep 5 00:10:28.118311 kernel: raid6: neonx2 gen() 13240 MB/s
Sep 5 00:10:28.135312 kernel: raid6: neonx1 gen() 10483 MB/s
Sep 5 00:10:28.152319 kernel: raid6: int64x8 gen() 6959 MB/s
Sep 5 00:10:28.169307 kernel: raid6: int64x4 gen() 7350 MB/s
Sep 5 00:10:28.186310 kernel: raid6: int64x2 gen() 6133 MB/s
Sep 5 00:10:28.203303 kernel: raid6: int64x1 gen() 5061 MB/s
Sep 5 00:10:28.203323 kernel: raid6: using algorithm neonx8 gen() 15758 MB/s
Sep 5 00:10:28.220323 kernel: raid6: .... xor() 12058 MB/s, rmw enabled
Sep 5 00:10:28.220367 kernel: raid6: using neon recovery algorithm
Sep 5 00:10:28.225344 kernel: xor: measuring software checksum speed
Sep 5 00:10:28.225364 kernel: 8regs : 19240 MB/sec
Sep 5 00:10:28.226387 kernel: 32regs : 19669 MB/sec
Sep 5 00:10:28.226400 kernel: arm64_neon : 26180 MB/sec
Sep 5 00:10:28.226409 kernel: xor: using function: arm64_neon (26180 MB/sec)
Sep 5 00:10:28.274323 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 00:10:28.285352 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:10:28.293411 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:10:28.304505 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Sep 5 00:10:28.307577 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:10:28.314424 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 00:10:28.325558 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Sep 5 00:10:28.350941 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:10:28.370500 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:10:28.410361 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:10:28.421445 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 00:10:28.433746 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:10:28.435070 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:10:28.436443 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:10:28.438205 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:10:28.450369 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 00:10:28.459139 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:10:28.474011 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 5 00:10:28.474156 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 00:10:28.476542 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 00:10:28.476574 kernel: GPT:9289727 != 19775487
Sep 5 00:10:28.476584 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 00:10:28.476593 kernel: GPT:9289727 != 19775487
Sep 5 00:10:28.476602 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 00:10:28.477442 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:10:28.480711 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:10:28.480840 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:10:28.483234 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:10:28.486915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:10:28.491247 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (518)
Sep 5 00:10:28.487115 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:10:28.490276 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:10:28.500293 kernel: BTRFS: device fsid 5d680510-9485-4285-abb3-c1615b7945ba devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (506)
Sep 5 00:10:28.500541 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:10:28.511689 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 00:10:28.514311 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:10:28.522522 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:10:28.529630 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 00:10:28.533418 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 00:10:28.534359 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 00:10:28.544446 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 00:10:28.545989 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:10:28.564974 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:10:28.715657 disk-uuid[551]: Primary Header is updated.
Sep 5 00:10:28.715657 disk-uuid[551]: Secondary Entries is updated.
Sep 5 00:10:28.715657 disk-uuid[551]: Secondary Header is updated.
Sep 5 00:10:28.719330 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:10:28.722464 kernel: GPT:disk_guids don't match.
Sep 5 00:10:28.722503 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 00:10:28.722515 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:10:28.726311 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:10:28.726352 kernel: block device autoloading is deprecated and will be removed.
Sep 5 00:10:29.726531 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:10:29.726586 disk-uuid[560]: The operation has completed successfully.
Sep 5 00:10:29.750345 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 00:10:29.750438 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 00:10:29.780461 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 00:10:29.783277 sh[575]: Success
Sep 5 00:10:29.792299 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 5 00:10:29.825041 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 00:10:29.842684 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 00:10:29.844740 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 00:10:29.855780 kernel: BTRFS info (device dm-0): first mount of filesystem 5d680510-9485-4285-abb3-c1615b7945ba
Sep 5 00:10:29.855818 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 5 00:10:29.855829 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 00:10:29.856619 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 00:10:29.857711 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 00:10:29.861628 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 00:10:29.862721 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 00:10:29.869466 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 00:10:29.870999 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 00:10:29.879375 kernel: BTRFS info (device vda6): first mount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:10:29.879417 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 00:10:29.879429 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:10:29.882585 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:10:29.889207 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 00:10:29.890592 kernel: BTRFS info (device vda6): last unmount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:10:29.897680 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 00:10:29.903442 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 00:10:29.965342 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:10:29.974461 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:10:29.977212 ignition[668]: Ignition 2.19.0
Sep 5 00:10:29.977226 ignition[668]: Stage: fetch-offline
Sep 5 00:10:29.977273 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:10:29.977301 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:10:29.977457 ignition[668]: parsed url from cmdline: ""
Sep 5 00:10:29.977461 ignition[668]: no config URL provided
Sep 5 00:10:29.977465 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 00:10:29.977475 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Sep 5 00:10:29.977498 ignition[668]: op(1): [started] loading QEMU firmware config module
Sep 5 00:10:29.977503 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 00:10:29.985965 ignition[668]: op(1): [finished] loading QEMU firmware config module
Sep 5 00:10:29.995805 systemd-networkd[763]: lo: Link UP
Sep 5 00:10:29.995819 systemd-networkd[763]: lo: Gained carrier
Sep 5 00:10:29.996498 systemd-networkd[763]: Enumeration completed
Sep 5 00:10:29.996774 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:10:29.996887 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:10:29.996890 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:10:29.998374 systemd-networkd[763]: eth0: Link UP
Sep 5 00:10:29.998378 systemd-networkd[763]: eth0: Gained carrier
Sep 5 00:10:29.998385 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:10:29.999455 systemd[1]: Reached target network.target - Network.
Sep 5 00:10:30.019320 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:10:30.038406 ignition[668]: parsing config with SHA512: 94f692f67004377ca7d04fd930c7299624f97d351609bc8de9e7d044660ea2831db11cc2591ddb85f05a7662c443331c9dcb234145db78127fa6877645bf30bb
Sep 5 00:10:30.044626 unknown[668]: fetched base config from "system"
Sep 5 00:10:30.044635 unknown[668]: fetched user config from "qemu"
Sep 5 00:10:30.045062 ignition[668]: fetch-offline: fetch-offline passed
Sep 5 00:10:30.047846 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:10:30.045130 ignition[668]: Ignition finished successfully
Sep 5 00:10:30.049210 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 00:10:30.061424 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 00:10:30.071305 ignition[769]: Ignition 2.19.0
Sep 5 00:10:30.071314 ignition[769]: Stage: kargs
Sep 5 00:10:30.071474 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:10:30.071483 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:10:30.072375 ignition[769]: kargs: kargs passed
Sep 5 00:10:30.072420 ignition[769]: Ignition finished successfully
Sep 5 00:10:30.074396 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 00:10:30.083447 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 00:10:30.092598 ignition[777]: Ignition 2.19.0
Sep 5 00:10:30.092607 ignition[777]: Stage: disks
Sep 5 00:10:30.092767 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:10:30.092777 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:10:30.093648 ignition[777]: disks: disks passed
Sep 5 00:10:30.093693 ignition[777]: Ignition finished successfully
Sep 5 00:10:30.096339 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 00:10:30.098010 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 00:10:30.098908 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 00:10:30.100509 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:10:30.102048 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:10:30.103394 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:10:30.114416 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 00:10:30.124318 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 5 00:10:30.128092 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 00:10:30.142383 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 00:10:30.181302 kernel: EXT4-fs (vda9): mounted filesystem a958ad86-437c-4ed7-b041-6695bea80f66 r/w with ordered data mode. Quota mode: none.
Sep 5 00:10:30.181937 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 00:10:30.183012 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:10:30.194360 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:10:30.196557 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 00:10:30.197358 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 00:10:30.197396 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 00:10:30.197416 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:10:30.205309 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (796)
Sep 5 00:10:30.205336 kernel: BTRFS info (device vda6): first mount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:10:30.205456 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 00:10:30.209694 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 00:10:30.209718 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:10:30.209729 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:10:30.210424 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:10:30.222412 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 00:10:30.252603 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 00:10:30.256291 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Sep 5 00:10:30.259274 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 00:10:30.262709 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 00:10:30.327390 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 00:10:30.339405 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 00:10:30.340735 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 00:10:30.345291 kernel: BTRFS info (device vda6): last unmount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:10:30.360691 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 00:10:30.362559 ignition[910]: INFO : Ignition 2.19.0
Sep 5 00:10:30.362559 ignition[910]: INFO : Stage: mount
Sep 5 00:10:30.364437 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:10:30.364437 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:10:30.364437 ignition[910]: INFO : mount: mount passed
Sep 5 00:10:30.364437 ignition[910]: INFO : Ignition finished successfully
Sep 5 00:10:30.364925 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 00:10:30.374388 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 00:10:30.855082 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 00:10:30.868473 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:10:30.876006 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (922)
Sep 5 00:10:30.876045 kernel: BTRFS info (device vda6): first mount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:10:30.876062 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 00:10:30.877318 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:10:30.879305 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:10:30.880818 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:10:30.903170 ignition[939]: INFO : Ignition 2.19.0
Sep 5 00:10:30.903170 ignition[939]: INFO : Stage: files
Sep 5 00:10:30.905096 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:10:30.905096 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:10:30.905096 ignition[939]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 00:10:30.905096 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 00:10:30.910150 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 00:10:30.910150 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 00:10:30.913724 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 00:10:30.913724 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 00:10:30.913724 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 5 00:10:30.913724 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 5 00:10:30.910658 unknown[939]: wrote ssh authorized keys file for user: core
Sep 5 00:10:30.979204 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 00:10:31.168852 systemd-networkd[763]: eth0: Gained IPv6LL
Sep 5 00:10:31.271201 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 00:10:31.273092 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 5 00:10:31.964305 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 00:10:32.694190 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 00:10:32.694190 ignition[939]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 00:10:32.697211 ignition[939]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:10:32.697211 ignition[939]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:10:32.697211 ignition[939]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 00:10:32.697211 ignition[939]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 5 00:10:32.697211 ignition[939]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:10:32.697211 ignition[939]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:10:32.697211 ignition[939]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 5 00:10:32.697211 ignition[939]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:10:32.714233 ignition[939]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:10:32.718466 ignition[939]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:10:32.718466 ignition[939]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:10:32.718466 ignition[939]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 00:10:32.718466 ignition[939]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 00:10:32.718466 ignition[939]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:10:32.718466 ignition[939]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:10:32.718466 ignition[939]: INFO : files: files passed
Sep 5 00:10:32.718466 ignition[939]: INFO : Ignition finished successfully
Sep 5 00:10:32.719956 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 00:10:32.728478 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 00:10:32.730022 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 00:10:32.732621 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 00:10:32.734133 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 00:10:32.737557 initrd-setup-root-after-ignition[968]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 00:10:32.740348 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:10:32.740348 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:10:32.743764 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:10:32.745301 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:10:32.746626 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 00:10:32.757431 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 00:10:32.777413 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 00:10:32.778206 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 00:10:32.779422 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 00:10:32.781213 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 00:10:32.782853 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 00:10:32.787420 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 00:10:32.801958 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:10:32.817460 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 00:10:32.825119 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:10:32.827172 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:10:32.828246 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:10:32.829118 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:10:32.829237 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:10:32.831156 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 00:10:32.832656 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:10:32.833973 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 00:10:32.835519 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:10:32.837006 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:10:32.838339 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:10:32.839842 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:10:32.841438 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:10:32.842993 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:10:32.844337 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:10:32.845772 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:10:32.845890 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:10:32.847800 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:10:32.849218 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:10:32.850679 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 00:10:32.851378 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:10:32.852949 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:10:32.853062 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:10:32.855047 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 00:10:32.855229 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:10:32.856943 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:10:32.858050 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:10:32.858150 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:10:32.859790 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 00:10:32.861041 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 00:10:32.862413 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 00:10:32.862498 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:10:32.863664 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:10:32.863740 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:10:32.865117 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:10:32.865219 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:10:32.867039 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 00:10:32.867133 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 00:10:32.880475 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Sep 5 00:10:32.881176 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 00:10:32.881312 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:10:32.886503 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 00:10:32.887192 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:10:32.887338 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:10:32.888794 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:10:32.888892 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:10:32.895458 ignition[994]: INFO : Ignition 2.19.0 Sep 5 00:10:32.895458 ignition[994]: INFO : Stage: umount Sep 5 00:10:32.896725 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:10:32.896725 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:10:32.896725 ignition[994]: INFO : umount: umount passed Sep 5 00:10:32.896725 ignition[994]: INFO : Ignition finished successfully Sep 5 00:10:32.896196 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:10:32.896275 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:10:32.899265 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 00:10:32.899352 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 00:10:32.901547 systemd[1]: Stopped target network.target - Network. Sep 5 00:10:32.902276 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 00:10:32.902347 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 00:10:32.903996 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 00:10:32.904039 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 00:10:32.905312 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 00:10:32.905351 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 00:10:32.907143 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 00:10:32.907186 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:10:32.908705 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 00:10:32.910062 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 00:10:32.912153 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 00:10:32.920362 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 00:10:32.920490 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 00:10:32.921327 systemd-networkd[763]: eth0: DHCPv6 lease lost Sep 5 00:10:32.922814 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 00:10:32.922864 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:10:32.924486 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 00:10:32.924609 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 00:10:32.927941 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 00:10:32.927987 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:10:32.940410 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 00:10:32.941114 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
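Ignition's umount stage runs with no base config: the "no configs" and "no config dir" lines above name the built-in search locations for this platform (qemu). A trivial probe of those same paths, purely illustrative:

    import os

    # Locations Ignition reports probing above; platform id is "qemu".
    BASE_DIRS = [
        "/usr/lib/ignition/base.d",
        "/usr/lib/ignition/base.platform.d/qemu",
    ]

    for d in BASE_DIRS:
        print(d, "->", "present" if os.path.isdir(d) else "absent")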
Sep 5 00:10:32.941169 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:10:32.942801 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:10:32.942837 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:10:32.944156 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 00:10:32.944191 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 00:10:32.946133 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:10:32.954670 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 00:10:32.954784 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 00:10:32.967979 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 00:10:32.968126 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:10:32.969923 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 00:10:32.969997 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 00:10:32.971640 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 00:10:32.971696 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 00:10:32.972590 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 00:10:32.972625 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:10:32.973889 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 00:10:32.973943 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:10:32.976006 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 00:10:32.976043 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 00:10:32.978049 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:10:32.978090 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:10:32.980344 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:10:32.980387 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 00:10:32.995441 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 00:10:32.996231 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 00:10:32.996306 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:10:32.998072 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:10:32.998112 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:10:33.001174 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 00:10:33.001265 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 00:10:33.003127 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 00:10:33.004723 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 00:10:33.013785 systemd[1]: Switching root. Sep 5 00:10:33.041113 systemd-journald[237]: Journal stopped Sep 5 00:10:33.765809 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
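Handoff point: the initrd journal receives SIGTERM from PID 1 and the system pivots into the real root. A throwaway helper to measure the gap between "Switching root." (00:10:33.013785) and "Journal stopped" (00:10:33.041113) using the short timestamps above; the year is an assumption, since this timestamp format omits it:

    from datetime import datetime

    def parse(stamp: str) -> datetime:
        # Journal short format carries no year; 2025 is assumed here.
        return datetime.strptime(f"2025 {stamp}", "%Y %b %d %H:%M:%S.%f")

    gap = parse("Sep 5 00:10:33.041113") - parse("Sep 5 00:10:33.013785")
    print(f"switch-root wind-down: {gap.total_seconds() * 1000:.1f} ms")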
Sep 5 00:10:33.765867 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 00:10:33.765885 kernel: SELinux: policy capability open_perms=1 Sep 5 00:10:33.765905 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 00:10:33.765916 kernel: SELinux: policy capability always_check_network=0 Sep 5 00:10:33.765930 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 00:10:33.765941 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 00:10:33.765951 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 00:10:33.765960 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 00:10:33.765970 kernel: audit: type=1403 audit(1757031033.205:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 00:10:33.765983 systemd[1]: Successfully loaded SELinux policy in 31.971ms. Sep 5 00:10:33.765998 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.133ms. Sep 5 00:10:33.766010 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:10:33.766022 systemd[1]: Detected virtualization kvm. Sep 5 00:10:33.766032 systemd[1]: Detected architecture arm64. Sep 5 00:10:33.766043 systemd[1]: Detected first boot. Sep 5 00:10:33.766053 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:10:33.766064 zram_generator::config[1045]: No configuration found. Sep 5 00:10:33.766076 systemd[1]: Populated /etc with preset unit settings. Sep 5 00:10:33.766088 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 00:10:33.766099 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 00:10:33.766110 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 00:10:33.766122 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 00:10:33.766133 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 00:10:33.766144 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 00:10:33.766155 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 00:10:33.766166 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 00:10:33.766178 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 00:10:33.766189 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 00:10:33.766200 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 00:10:33.766211 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:10:33.766222 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:10:33.766233 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 00:10:33.766244 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 00:10:33.766255 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 00:10:33.766266 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
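The systemd 255 banner above encodes the build's compile-time options as a ±FEATURE list. A quick split into enabled and disabled sets (string copied verbatim from the banner):

    FLAGS = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
             "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
             "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
             "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
             "-XKBCOMMON +UTMP -SYSVINIT")

    enabled = sorted(f[1:] for f in FLAGS.split() if f[0] == "+")
    disabled = sorted(f[1:] for f in FLAGS.split() if f[0] == "-")
    print("enabled: ", ", ".join(enabled))
    print("disabled:", ", ".join(disabled))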
Sep 5 00:10:33.766296 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 5 00:10:33.766309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:10:33.766322 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 00:10:33.766333 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 00:10:33.766344 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 00:10:33.766357 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 00:10:33.766368 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:10:33.766380 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:10:33.766391 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:10:33.766403 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:10:33.766418 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 00:10:33.766428 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 00:10:33.766439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:10:33.766451 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:10:33.766462 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:10:33.766473 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 00:10:33.766484 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 00:10:33.766496 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 00:10:33.766507 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 00:10:33.766518 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 00:10:33.766529 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 00:10:33.766539 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 00:10:33.766551 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 00:10:33.766563 systemd[1]: Reached target machines.target - Containers. Sep 5 00:10:33.766574 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 00:10:33.766587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:10:33.766598 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:10:33.766609 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 00:10:33.766620 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:10:33.766631 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:10:33.766642 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:10:33.766653 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 00:10:33.766664 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:10:33.766675 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Sep 5 00:10:33.766688 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 00:10:33.766699 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 00:10:33.766709 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 00:10:33.766720 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 00:10:33.766731 kernel: fuse: init (API version 7.39) Sep 5 00:10:33.766741 kernel: loop: module loaded Sep 5 00:10:33.766751 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:10:33.766762 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:10:33.766772 kernel: ACPI: bus type drm_connector registered Sep 5 00:10:33.766784 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:10:33.766796 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 00:10:33.766808 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:10:33.766818 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 00:10:33.766829 systemd[1]: Stopped verity-setup.service. Sep 5 00:10:33.766840 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 00:10:33.766850 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 00:10:33.766879 systemd-journald[1109]: Collecting audit messages is disabled. Sep 5 00:10:33.766907 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 00:10:33.766918 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 00:10:33.766930 systemd-journald[1109]: Journal started Sep 5 00:10:33.766953 systemd-journald[1109]: Runtime Journal (/run/log/journal/08b616c62c6d4d57a7b510d7279887f1) is 5.9M, max 47.3M, 41.4M free. Sep 5 00:10:33.573889 systemd[1]: Queued start job for default target multi-user.target. Sep 5 00:10:33.590062 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 00:10:33.590433 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 00:10:33.769694 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:10:33.770366 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 00:10:33.771363 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 00:10:33.774338 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 00:10:33.775511 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:10:33.776821 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 00:10:33.778350 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 00:10:33.779564 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:10:33.779705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:10:33.781071 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:10:33.781248 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:10:33.782488 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:10:33.784333 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:10:33.785581 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 00:10:33.785721 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Sep 5 00:10:33.786996 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:10:33.787146 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:10:33.788421 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:10:33.789580 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:10:33.790885 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 00:10:33.803006 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:10:33.813414 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 00:10:33.815524 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 00:10:33.816428 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 00:10:33.816472 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:10:33.818237 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 00:10:33.820320 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:10:33.822415 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 00:10:33.823332 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:10:33.825169 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 00:10:33.826916 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 00:10:33.827991 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:10:33.829444 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 00:10:33.830316 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:10:33.831156 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:10:33.835114 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 00:10:33.835871 systemd-journald[1109]: Time spent on flushing to /var/log/journal/08b616c62c6d4d57a7b510d7279887f1 is 20.724ms for 857 entries. Sep 5 00:10:33.835871 systemd-journald[1109]: System Journal (/var/log/journal/08b616c62c6d4d57a7b510d7279887f1) is 8.0M, max 195.6M, 187.6M free. Sep 5 00:10:33.861904 systemd-journald[1109]: Received client request to flush runtime journal. Sep 5 00:10:33.861943 kernel: loop0: detected capacity change from 0 to 114328 Sep 5 00:10:33.840584 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 00:10:33.844371 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:10:33.845634 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 00:10:33.846862 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 00:10:33.848358 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:10:33.849696 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Sep 5 00:10:33.853695 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 00:10:33.856444 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 00:10:33.858639 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 00:10:33.866048 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 00:10:33.873027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:10:33.879326 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 00:10:33.885032 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 00:10:33.894489 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:10:33.898292 kernel: loop1: detected capacity change from 0 to 211168 Sep 5 00:10:33.898819 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 00:10:33.900295 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 5 00:10:33.902713 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 00:10:33.919420 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Sep 5 00:10:33.919436 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Sep 5 00:10:33.925395 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:10:33.936332 kernel: loop2: detected capacity change from 0 to 114432 Sep 5 00:10:33.974326 kernel: loop3: detected capacity change from 0 to 114328 Sep 5 00:10:33.979313 kernel: loop4: detected capacity change from 0 to 211168 Sep 5 00:10:33.984326 kernel: loop5: detected capacity change from 0 to 114432 Sep 5 00:10:33.987187 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 00:10:33.987622 (sd-merge)[1181]: Merged extensions into '/usr'. Sep 5 00:10:33.992341 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 00:10:33.992359 systemd[1]: Reloading... Sep 5 00:10:34.040347 zram_generator::config[1205]: No configuration found. Sep 5 00:10:34.115390 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 00:10:34.154227 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:10:34.190139 systemd[1]: Reloading finished in 197 ms. Sep 5 00:10:34.219948 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 00:10:34.221199 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 00:10:34.233503 systemd[1]: Starting ensure-sysext.service... Sep 5 00:10:34.235265 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:10:34.241816 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Sep 5 00:10:34.241831 systemd[1]: Reloading... Sep 5 00:10:34.260855 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 5 00:10:34.261480 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 00:10:34.262223 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:10:34.262551 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Sep 5 00:10:34.262679 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Sep 5 00:10:34.266443 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:10:34.266568 systemd-tmpfiles[1242]: Skipping /boot Sep 5 00:10:34.277003 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:10:34.277120 systemd-tmpfiles[1242]: Skipping /boot Sep 5 00:10:34.282307 zram_generator::config[1269]: No configuration found. Sep 5 00:10:34.366576 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:10:34.402171 systemd[1]: Reloading finished in 160 ms. Sep 5 00:10:34.417243 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:10:34.429695 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:10:34.438631 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:10:34.440945 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:10:34.443258 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:10:34.450368 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:10:34.452810 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:10:34.456489 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:10:34.459564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:10:34.463559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:10:34.475830 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:10:34.478004 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:10:34.478933 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:10:34.479423 systemd-udevd[1316]: Using default interface naming scheme 'v255'. Sep 5 00:10:34.485360 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:10:34.487239 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:10:34.488997 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 00:10:34.490799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:10:34.492303 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:10:34.493751 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:10:34.496445 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:10:34.496804 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
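The systemd-tmpfiles "Duplicate line" warnings above are harmless: a later tmpfiles.d fragment re-declares a path an earlier fragment already configured, and the duplicate is ignored. A rough scan for such collisions (simplified sketch; real resolution also considers the entry type and /etc-over-/usr masking):

    import collections
    import glob

    claims = collections.defaultdict(list)
    for frag in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
        with open(frag) as fh:
            for lineno, line in enumerate(fh, start=1):
                fields = line.split()
                if len(fields) >= 2 and not line.lstrip().startswith("#"):
                    claims[fields[1]].append(f"{frag}:{lineno}")

    for path, sources in claims.items():
        if len(sources) > 1:
            print(path, "declared by", ", ".join(sources))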
Sep 5 00:10:34.496866 augenrules[1330]: No rules Sep 5 00:10:34.500344 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:10:34.501655 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:10:34.501779 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:10:34.519763 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:10:34.526987 systemd[1]: Finished ensure-sysext.service. Sep 5 00:10:34.528403 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:10:34.533750 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 5 00:10:34.534036 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:10:34.541561 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:10:34.545341 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1350) Sep 5 00:10:34.545481 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:10:34.549487 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:10:34.555464 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:10:34.556377 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:10:34.558021 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:10:34.573491 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:10:34.581481 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 00:10:34.582308 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:10:34.582848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:10:34.582999 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:10:34.584189 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:10:34.584351 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:10:34.585438 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:10:34.585562 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:10:34.588085 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:10:34.588212 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:10:34.603164 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:10:34.615598 systemd-resolved[1310]: Positive Trust Anchors: Sep 5 00:10:34.615755 systemd-resolved[1310]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:10:34.615788 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:10:34.620443 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 00:10:34.621364 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:10:34.621424 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:10:34.622068 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 00:10:34.623663 systemd-resolved[1310]: Defaulting to hostname 'linux'. Sep 5 00:10:34.625467 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:10:34.626536 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:10:34.639378 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 00:10:34.649722 systemd-networkd[1373]: lo: Link UP Sep 5 00:10:34.649729 systemd-networkd[1373]: lo: Gained carrier Sep 5 00:10:34.650637 systemd-networkd[1373]: Enumeration completed Sep 5 00:10:34.650743 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:10:34.651818 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:10:34.651828 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:10:34.652436 systemd[1]: Reached target network.target - Network. Sep 5 00:10:34.652813 systemd-networkd[1373]: eth0: Link UP Sep 5 00:10:34.652819 systemd-networkd[1373]: eth0: Gained carrier Sep 5 00:10:34.652917 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:10:34.668159 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 00:10:34.669580 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:10:34.676064 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:10:34.678014 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:10:34.678870 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:10:34.679540 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. Sep 5 00:10:34.680834 systemd-timesyncd[1374]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 00:10:34.680884 systemd-timesyncd[1374]: Initial clock synchronization to Fri 2025-09-05 00:10:34.415256 UTC. Sep 5 00:10:34.682151 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
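The single "positive trust anchor" systemd-resolved lists above is the IANA root zone key (KSK-2017) in DS form, the compiled-in DNSSEC root of trust; the long negative list is the usual private and reverse zones exempted from validation. The DS fields, per RFC 4034:

    # Record copied from the log; algorithm 8 = RSASHA256, digest type 2 = SHA-256.
    DS = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, _cls, _rrtype, key_tag, alg, digest_type, digest = DS.split()
    print(f"owner={owner} key_tag={key_tag} algorithm={alg} digest_type={digest_type}")
    print(f"digest={digest}")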
Sep 5 00:10:34.684622 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 00:10:34.697348 lvm[1395]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:10:34.720008 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:10:34.734792 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 00:10:34.736036 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:10:34.737016 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:10:34.737996 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:10:34.739023 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:10:34.740342 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:10:34.741218 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:10:34.742258 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:10:34.743171 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:10:34.743203 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:10:34.744004 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:10:34.745531 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:10:34.747724 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:10:34.756160 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:10:34.758196 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 00:10:34.759605 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:10:34.760549 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:10:34.761316 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:10:34.762028 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:10:34.762058 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:10:34.762983 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:10:34.764883 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:10:34.766138 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:10:34.769061 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 00:10:34.771717 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:10:34.772679 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:10:34.776130 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:10:34.779062 jq[1408]: false Sep 5 00:10:34.779102 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:10:34.782320 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:10:34.785978 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
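Several of the sockets above (docker.socket, sshd.socket, dbus.socket) are socket-activated: systemd owns the listener and hands it to the service on first use. The handoff protocol is small enough to sketch; inherited file descriptors start at 3 and LISTEN_PID/LISTEN_FDS describe them, per sd_listen_fds(3):

    import os
    import socket

    SD_LISTEN_FDS_START = 3  # first inherited listener fd, by convention

    def listen_fds() -> list:
        # Only honor the env vars if they were addressed to this process.
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []
        count = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]

    for listener in listen_fds():
        print("inherited listener:", listener)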
Sep 5 00:10:34.791959 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:10:34.793551 extend-filesystems[1409]: Found loop3 Sep 5 00:10:34.793551 extend-filesystems[1409]: Found loop4 Sep 5 00:10:34.793551 extend-filesystems[1409]: Found loop5 Sep 5 00:10:34.793551 extend-filesystems[1409]: Found vda Sep 5 00:10:34.793551 extend-filesystems[1409]: Found vda1 Sep 5 00:10:34.793551 extend-filesystems[1409]: Found vda2 Sep 5 00:10:34.793551 extend-filesystems[1409]: Found vda3 Sep 5 00:10:34.793551 extend-filesystems[1409]: Found usr Sep 5 00:10:34.793551 extend-filesystems[1409]: Found vda4 Sep 5 00:10:34.793551 extend-filesystems[1409]: Found vda6 Sep 5 00:10:34.793551 extend-filesystems[1409]: Found vda7 Sep 5 00:10:34.793551 extend-filesystems[1409]: Found vda9 Sep 5 00:10:34.793551 extend-filesystems[1409]: Checking size of /dev/vda9 Sep 5 00:10:34.821275 extend-filesystems[1409]: Resized partition /dev/vda9 Sep 5 00:10:34.793555 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 00:10:34.823357 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 00:10:34.802075 dbus-daemon[1407]: [system] SELinux support is enabled Sep 5 00:10:34.823633 extend-filesystems[1430]: resize2fs 1.47.1 (20-May-2024) Sep 5 00:10:34.794368 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:10:34.795501 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:10:34.802536 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:10:34.829833 jq[1426]: true Sep 5 00:10:34.806867 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:10:34.812667 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 00:10:34.823351 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:10:34.823497 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:10:34.823743 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:10:34.823875 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 00:10:34.835620 update_engine[1422]: I20250905 00:10:34.835134 1422 main.cc:92] Flatcar Update Engine starting Sep 5 00:10:34.837458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1346) Sep 5 00:10:34.837434 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 00:10:34.837624 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:10:34.839789 update_engine[1422]: I20250905 00:10:34.839728 1422 update_check_scheduler.cc:74] Next update check in 6m12s Sep 5 00:10:34.856854 (ntainerd)[1435]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:10:34.859309 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 00:10:34.859876 jq[1434]: true Sep 5 00:10:34.860979 tar[1432]: linux-arm64/LICENSE Sep 5 00:10:34.867080 systemd[1]: Started update-engine.service - Update Engine. 
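extend-filesystems above grows the root online: the vda9 partition was enlarged at provisioning time, and resize2fs 1.47.1 expands the mounted ext4 filesystem from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB) without an unmount. The manual equivalent, expressed here in Python (needs root; device name as logged on this machine):

    import subprocess

    # Online grow of a mounted ext4 filesystem to fill its already
    # enlarged partition; this is what extend-filesystems.service does.
    subprocess.run(["resize2fs", "/dev/vda9"], check=True)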
Sep 5 00:10:34.871463 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:10:34.871495 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:10:34.872696 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:10:34.872713 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:10:34.875583 tar[1432]: linux-arm64/helm Sep 5 00:10:34.875608 extend-filesystems[1430]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 00:10:34.875608 extend-filesystems[1430]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 00:10:34.875608 extend-filesystems[1430]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 00:10:34.875205 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:10:34.883503 extend-filesystems[1409]: Resized filesystem in /dev/vda9 Sep 5 00:10:34.876490 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:10:34.876667 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 00:10:34.878648 systemd-logind[1417]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 00:10:34.879690 systemd-logind[1417]: New seat seat0. Sep 5 00:10:34.883720 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 00:10:34.902197 bash[1462]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:10:34.903494 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:10:34.905785 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 00:10:34.930487 locksmithd[1452]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:10:35.005924 containerd[1435]: time="2025-09-05T00:10:35.005833946Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 00:10:35.030729 containerd[1435]: time="2025-09-05T00:10:35.030687624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:10:35.032093 containerd[1435]: time="2025-09-05T00:10:35.032058410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:10:35.032093 containerd[1435]: time="2025-09-05T00:10:35.032091630Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 00:10:35.032138 containerd[1435]: time="2025-09-05T00:10:35.032105861Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 00:10:35.032290 containerd[1435]: time="2025-09-05T00:10:35.032267358Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 00:10:35.032322 containerd[1435]: time="2025-09-05T00:10:35.032302279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Sep 5 00:10:35.032383 containerd[1435]: time="2025-09-05T00:10:35.032363807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:10:35.032407 containerd[1435]: time="2025-09-05T00:10:35.032381519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:10:35.032555 containerd[1435]: time="2025-09-05T00:10:35.032532652Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:10:35.032580 containerd[1435]: time="2025-09-05T00:10:35.032553264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 00:10:35.032580 containerd[1435]: time="2025-09-05T00:10:35.032566915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:10:35.032580 containerd[1435]: time="2025-09-05T00:10:35.032576042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 00:10:35.032660 containerd[1435]: time="2025-09-05T00:10:35.032644647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:10:35.032854 containerd[1435]: time="2025-09-05T00:10:35.032835148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:10:35.032947 containerd[1435]: time="2025-09-05T00:10:35.032928736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:10:35.032972 containerd[1435]: time="2025-09-05T00:10:35.032946602Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 00:10:35.033043 containerd[1435]: time="2025-09-05T00:10:35.033025958Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 00:10:35.033077 containerd[1435]: time="2025-09-05T00:10:35.033070122Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:10:35.037098 containerd[1435]: time="2025-09-05T00:10:35.037066117Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 00:10:35.037161 containerd[1435]: time="2025-09-05T00:10:35.037125943Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 00:10:35.037161 containerd[1435]: time="2025-09-05T00:10:35.037141644Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 00:10:35.037161 containerd[1435]: time="2025-09-05T00:10:35.037157152Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 00:10:35.037209 containerd[1435]: time="2025-09-05T00:10:35.037172234Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 5 00:10:35.037353 containerd[1435]: time="2025-09-05T00:10:35.037334156Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 00:10:35.037557 containerd[1435]: time="2025-09-05T00:10:35.037538695Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 00:10:35.037682 containerd[1435]: time="2025-09-05T00:10:35.037661906Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 00:10:35.037707 containerd[1435]: time="2025-09-05T00:10:35.037685767Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 00:10:35.037707 containerd[1435]: time="2025-09-05T00:10:35.037700501Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 00:10:35.037741 containerd[1435]: time="2025-09-05T00:10:35.037713804Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 00:10:35.037741 containerd[1435]: time="2025-09-05T00:10:35.037727030Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 00:10:35.037741 containerd[1435]: time="2025-09-05T00:10:35.037739560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 00:10:35.037796 containerd[1435]: time="2025-09-05T00:10:35.037752941Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 00:10:35.037796 containerd[1435]: time="2025-09-05T00:10:35.037767018Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 00:10:35.037796 containerd[1435]: time="2025-09-05T00:10:35.037781327Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 00:10:35.037796 containerd[1435]: time="2025-09-05T00:10:35.037793431Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 00:10:35.037860 containerd[1435]: time="2025-09-05T00:10:35.037806270Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 00:10:35.037860 containerd[1435]: time="2025-09-05T00:10:35.037825607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.037860 containerd[1435]: time="2025-09-05T00:10:35.037841965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.037860 containerd[1435]: time="2025-09-05T00:10:35.037854302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.037935 containerd[1435]: time="2025-09-05T00:10:35.037866213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.037935 containerd[1435]: time="2025-09-05T00:10:35.037878472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.037935 containerd[1435]: time="2025-09-05T00:10:35.037890925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 5 00:10:35.037935 containerd[1435]: time="2025-09-05T00:10:35.037902836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.037935 containerd[1435]: time="2025-09-05T00:10:35.037915520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.037935 containerd[1435]: time="2025-09-05T00:10:35.037933503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.038035 containerd[1435]: time="2025-09-05T00:10:35.037952220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.038035 containerd[1435]: time="2025-09-05T00:10:35.037964789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.038035 containerd[1435]: time="2025-09-05T00:10:35.037976468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.038035 containerd[1435]: time="2025-09-05T00:10:35.037994296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.038035 containerd[1435]: time="2025-09-05T00:10:35.038023881Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 00:10:35.038125 containerd[1435]: time="2025-09-05T00:10:35.038048670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.038125 containerd[1435]: time="2025-09-05T00:10:35.038061973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.038125 containerd[1435]: time="2025-09-05T00:10:35.038072879Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 00:10:35.038856 containerd[1435]: time="2025-09-05T00:10:35.038818099Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 00:10:35.039019 containerd[1435]: time="2025-09-05T00:10:35.038999705Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 00:10:35.039048 containerd[1435]: time="2025-09-05T00:10:35.039016527Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 00:10:35.039048 containerd[1435]: time="2025-09-05T00:10:35.039032538Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 00:10:35.039048 containerd[1435]: time="2025-09-05T00:10:35.039042709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 00:10:35.039098 containerd[1435]: time="2025-09-05T00:10:35.039056321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 00:10:35.039098 containerd[1435]: time="2025-09-05T00:10:35.039066531Z" level=info msg="NRI interface is disabled by configuration." Sep 5 00:10:35.039098 containerd[1435]: time="2025-09-05T00:10:35.039076972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 5 00:10:35.039414 containerd[1435]: time="2025-09-05T00:10:35.039359630Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 00:10:35.039516 containerd[1435]: time="2025-09-05T00:10:35.039421274Z" level=info msg="Connect containerd service" Sep 5 00:10:35.039516 containerd[1435]: time="2025-09-05T00:10:35.039447146Z" level=info msg="using legacy CRI server" Sep 5 00:10:35.039516 containerd[1435]: time="2025-09-05T00:10:35.039453720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:10:35.039589 containerd[1435]: time="2025-09-05T00:10:35.039533656Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 00:10:35.040242 containerd[1435]: time="2025-09-05T00:10:35.040212591Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:10:35.040484 
containerd[1435]: time="2025-09-05T00:10:35.040441765Z" level=info msg="Start subscribing containerd event" Sep 5 00:10:35.040514 containerd[1435]: time="2025-09-05T00:10:35.040500006Z" level=info msg="Start recovering state" Sep 5 00:10:35.040594 containerd[1435]: time="2025-09-05T00:10:35.040576964Z" level=info msg="Start event monitor" Sep 5 00:10:35.040617 containerd[1435]: time="2025-09-05T00:10:35.040594135Z" level=info msg="Start snapshots syncer" Sep 5 00:10:35.040617 containerd[1435]: time="2025-09-05T00:10:35.040603841Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:10:35.040617 containerd[1435]: time="2025-09-05T00:10:35.040611228Z" level=info msg="Start streaming server" Sep 5 00:10:35.041036 containerd[1435]: time="2025-09-05T00:10:35.041011334Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 00:10:35.041091 containerd[1435]: time="2025-09-05T00:10:35.041064238Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:10:35.041145 containerd[1435]: time="2025-09-05T00:10:35.041131683Z" level=info msg="containerd successfully booted in 0.037924s" Sep 5 00:10:35.042200 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 00:10:35.143554 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:10:35.163323 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:10:35.171584 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:10:35.176967 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 00:10:35.177315 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:10:35.181692 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 00:10:35.194387 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:10:35.204557 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:10:35.206537 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 5 00:10:35.207544 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:10:35.244331 tar[1432]: linux-arm64/README.md Sep 5 00:10:35.257674 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:10:36.288485 systemd-networkd[1373]: eth0: Gained IPv6LL Sep 5 00:10:36.291340 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:10:36.293576 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:10:36.307700 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:10:36.310797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:10:36.313177 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:10:36.342694 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 00:10:36.344467 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:10:36.344661 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 00:10:36.346847 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 00:10:36.927625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:10:36.929469 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:10:36.931427 systemd[1]: Startup finished in 540ms (kernel) + 5.527s (initrd) + 3.757s (userspace) = 9.826s. 
Sep 5 00:10:36.931482 (kubelet)[1519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:10:37.286641 kubelet[1519]: E0905 00:10:37.286529 1519 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:10:37.289196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:10:37.289352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:10:40.431128 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:10:40.432570 systemd[1]: Started sshd@0-10.0.0.23:22-10.0.0.1:45236.service - OpenSSH per-connection server daemon (10.0.0.1:45236). Sep 5 00:10:40.475159 sshd[1532]: Accepted publickey for core from 10.0.0.1 port 45236 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:10:40.476849 sshd[1532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:10:40.484034 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:10:40.494668 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:10:40.496200 systemd-logind[1417]: New session 1 of user core. Sep 5 00:10:40.503696 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:10:40.505888 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 00:10:40.512239 (systemd)[1536]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:10:40.582747 systemd[1536]: Queued start job for default target default.target. Sep 5 00:10:40.591184 systemd[1536]: Created slice app.slice - User Application Slice. Sep 5 00:10:40.591212 systemd[1536]: Reached target paths.target - Paths. Sep 5 00:10:40.591224 systemd[1536]: Reached target timers.target - Timers. Sep 5 00:10:40.592475 systemd[1536]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:10:40.601848 systemd[1536]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:10:40.601907 systemd[1536]: Reached target sockets.target - Sockets. Sep 5 00:10:40.601919 systemd[1536]: Reached target basic.target - Basic System. Sep 5 00:10:40.601952 systemd[1536]: Reached target default.target - Main User Target. Sep 5 00:10:40.601976 systemd[1536]: Startup finished in 84ms. Sep 5 00:10:40.602260 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:10:40.603688 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:10:40.661998 systemd[1]: Started sshd@1-10.0.0.23:22-10.0.0.1:45238.service - OpenSSH per-connection server daemon (10.0.0.1:45238). Sep 5 00:10:40.695324 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 45238 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:10:40.696559 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:10:40.700141 systemd-logind[1417]: New session 2 of user core. Sep 5 00:10:40.708457 systemd[1]: Started session-2.scope - Session 2 of User core. 
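The kubelet exit above is the expected first-boot state: /var/lib/kubelet/config.yaml does not exist yet (it is normally written later, e.g. by kubeadm, whose KUBELET_KUBEADM_ARGS variable the unit already references), so the unit fails with status 1 and systemd will keep rescheduling it. A sketch of the same precondition check in Go; the path is copied from the error record, everything else is illustrative:

package main

import (
	"errors"
	"fmt"
	"log"
	"os"
)

func main() {
	// The file the kubelet tried and failed to read, per the record above.
	const path = "/var/lib/kubelet/config.yaml"

	if _, err := os.Stat(path); errors.Is(err, os.ErrNotExist) {
		// Mirrors the failure mode: exit non-zero until the config appears.
		log.Fatalf("failed to load Kubelet config file %q: %v", path, err)
	}
	fmt.Println("kubelet config present:", path)
}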
Sep 5 00:10:40.759393 sshd[1547]: pam_unix(sshd:session): session closed for user core Sep 5 00:10:40.770600 systemd[1]: sshd@1-10.0.0.23:22-10.0.0.1:45238.service: Deactivated successfully. Sep 5 00:10:40.773578 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:10:40.775445 systemd-logind[1417]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:10:40.775829 systemd[1]: Started sshd@2-10.0.0.23:22-10.0.0.1:45246.service - OpenSSH per-connection server daemon (10.0.0.1:45246). Sep 5 00:10:40.776739 systemd-logind[1417]: Removed session 2. Sep 5 00:10:40.807750 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 45246 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:10:40.808897 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:10:40.812648 systemd-logind[1417]: New session 3 of user core. Sep 5 00:10:40.819427 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:10:40.866573 sshd[1554]: pam_unix(sshd:session): session closed for user core Sep 5 00:10:40.875614 systemd[1]: sshd@2-10.0.0.23:22-10.0.0.1:45246.service: Deactivated successfully. Sep 5 00:10:40.876983 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:10:40.878166 systemd-logind[1417]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:10:40.879247 systemd[1]: Started sshd@3-10.0.0.23:22-10.0.0.1:45256.service - OpenSSH per-connection server daemon (10.0.0.1:45256). Sep 5 00:10:40.880648 systemd-logind[1417]: Removed session 3. Sep 5 00:10:40.910950 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 45256 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:10:40.912166 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:10:40.915762 systemd-logind[1417]: New session 4 of user core. Sep 5 00:10:40.928438 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:10:40.980577 sshd[1561]: pam_unix(sshd:session): session closed for user core Sep 5 00:10:40.989472 systemd[1]: sshd@3-10.0.0.23:22-10.0.0.1:45256.service: Deactivated successfully. Sep 5 00:10:40.990833 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 00:10:40.992294 systemd-logind[1417]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:10:40.993378 systemd[1]: Started sshd@4-10.0.0.23:22-10.0.0.1:45264.service - OpenSSH per-connection server daemon (10.0.0.1:45264). Sep 5 00:10:40.994038 systemd-logind[1417]: Removed session 4. Sep 5 00:10:41.027016 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 45264 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:10:41.028312 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:10:41.032113 systemd-logind[1417]: New session 5 of user core. Sep 5 00:10:41.045423 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 00:10:41.101550 sudo[1571]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:10:41.101847 sudo[1571]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:10:41.117014 sudo[1571]: pam_unix(sudo:session): session closed for user root Sep 5 00:10:41.118783 sshd[1568]: pam_unix(sshd:session): session closed for user core Sep 5 00:10:41.128664 systemd[1]: sshd@4-10.0.0.23:22-10.0.0.1:45264.service: Deactivated successfully. Sep 5 00:10:41.130536 systemd[1]: session-5.scope: Deactivated successfully. 
Sep 5 00:10:41.131805 systemd-logind[1417]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:10:41.136521 systemd[1]: Started sshd@5-10.0.0.23:22-10.0.0.1:45270.service - OpenSSH per-connection server daemon (10.0.0.1:45270). Sep 5 00:10:41.137451 systemd-logind[1417]: Removed session 5. Sep 5 00:10:41.165369 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 45270 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:10:41.166687 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:10:41.170600 systemd-logind[1417]: New session 6 of user core. Sep 5 00:10:41.181449 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 00:10:41.231537 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:10:41.231828 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:10:41.234830 sudo[1580]: pam_unix(sudo:session): session closed for user root Sep 5 00:10:41.239193 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 00:10:41.239758 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:10:41.262544 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 00:10:41.263780 auditctl[1583]: No rules Sep 5 00:10:41.264594 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:10:41.266323 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 00:10:41.267917 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:10:41.290409 augenrules[1601]: No rules Sep 5 00:10:41.291656 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:10:41.292945 sudo[1579]: pam_unix(sudo:session): session closed for user root Sep 5 00:10:41.294762 sshd[1576]: pam_unix(sshd:session): session closed for user core Sep 5 00:10:41.304490 systemd[1]: sshd@5-10.0.0.23:22-10.0.0.1:45270.service: Deactivated successfully. Sep 5 00:10:41.306840 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:10:41.308321 systemd-logind[1417]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:10:41.310009 systemd[1]: Started sshd@6-10.0.0.23:22-10.0.0.1:45276.service - OpenSSH per-connection server daemon (10.0.0.1:45276). Sep 5 00:10:41.311142 systemd-logind[1417]: Removed session 6. Sep 5 00:10:41.343704 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 45276 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:10:41.344953 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:10:41.349818 systemd-logind[1417]: New session 7 of user core. Sep 5 00:10:41.358435 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:10:41.408511 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:10:41.408788 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:10:41.661546 systemd[1]: Starting docker.service - Docker Application Container Engine... 
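docker.service is starting here, and the records below show the daemon finish initialization and listen on /run/docker.sock. Once it is up, the API can be probed; a sketch using the Docker Go SDK (github.com/docker/docker/client is an assumption, and FromEnv falls back to the default unix:///var/run/docker.sock, which on this system is the same socket via the /var/run symlink):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker API version:", ping.APIVersion)
}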
Sep 5 00:10:41.661607 (dockerd)[1630]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:10:41.868123 dockerd[1630]: time="2025-09-05T00:10:41.868064889Z" level=info msg="Starting up" Sep 5 00:10:42.016664 dockerd[1630]: time="2025-09-05T00:10:42.016562389Z" level=info msg="Loading containers: start." Sep 5 00:10:42.104299 kernel: Initializing XFRM netlink socket Sep 5 00:10:42.163685 systemd-networkd[1373]: docker0: Link UP Sep 5 00:10:42.181490 dockerd[1630]: time="2025-09-05T00:10:42.181430297Z" level=info msg="Loading containers: done." Sep 5 00:10:42.192700 dockerd[1630]: time="2025-09-05T00:10:42.192645764Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:10:42.192847 dockerd[1630]: time="2025-09-05T00:10:42.192760963Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 00:10:42.192874 dockerd[1630]: time="2025-09-05T00:10:42.192862621Z" level=info msg="Daemon has completed initialization" Sep 5 00:10:42.224007 dockerd[1630]: time="2025-09-05T00:10:42.223883609Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:10:42.224095 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:10:42.850446 containerd[1435]: time="2025-09-05T00:10:42.850406270Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 5 00:10:43.576803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3184750995.mount: Deactivated successfully. Sep 5 00:10:45.190324 containerd[1435]: time="2025-09-05T00:10:45.190205677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:45.190879 containerd[1435]: time="2025-09-05T00:10:45.190842389Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615" Sep 5 00:10:45.191548 containerd[1435]: time="2025-09-05T00:10:45.191522162Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:45.196309 containerd[1435]: time="2025-09-05T00:10:45.194960479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:45.196469 containerd[1435]: time="2025-09-05T00:10:45.196440801Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 2.345990351s" Sep 5 00:10:45.196543 containerd[1435]: time="2025-09-05T00:10:45.196528430Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 5 00:10:45.197998 containerd[1435]: time="2025-09-05T00:10:45.197955580Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 5 00:10:46.901306 containerd[1435]: time="2025-09-05T00:10:46.898332717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:46.901306 containerd[1435]: time="2025-09-05T00:10:46.898753796Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979" Sep 5 00:10:46.901306 containerd[1435]: time="2025-09-05T00:10:46.899631082Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:46.903149 containerd[1435]: time="2025-09-05T00:10:46.903113316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:46.905323 containerd[1435]: time="2025-09-05T00:10:46.905076602Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.706899246s" Sep 5 00:10:46.905323 containerd[1435]: time="2025-09-05T00:10:46.905118043Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 5 00:10:46.905614 containerd[1435]: time="2025-09-05T00:10:46.905590725Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 5 00:10:47.539594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:10:47.554473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:10:47.671265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:10:47.674664 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:10:47.708511 kubelet[1846]: E0905 00:10:47.708464 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:10:47.711991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:10:47.712239 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 5 00:10:48.414635 containerd[1435]: time="2025-09-05T00:10:48.414586187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:48.416323 containerd[1435]: time="2025-09-05T00:10:48.416270399Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016" Sep 5 00:10:48.417507 containerd[1435]: time="2025-09-05T00:10:48.417397051Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:48.421009 containerd[1435]: time="2025-09-05T00:10:48.420965775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:48.422087 containerd[1435]: time="2025-09-05T00:10:48.422058705Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.51643597s" Sep 5 00:10:48.422136 containerd[1435]: time="2025-09-05T00:10:48.422094136Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 5 00:10:48.422614 containerd[1435]: time="2025-09-05T00:10:48.422589701Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 5 00:10:49.515837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1449922731.mount: Deactivated successfully. 
Sep 5 00:10:49.993637 containerd[1435]: time="2025-09-05T00:10:49.993513300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:49.994071 containerd[1435]: time="2025-09-05T00:10:49.994036608Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961" Sep 5 00:10:49.997273 containerd[1435]: time="2025-09-05T00:10:49.997098060Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:49.999090 containerd[1435]: time="2025-09-05T00:10:49.999038757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:49.999815 containerd[1435]: time="2025-09-05T00:10:49.999783884Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.57716447s" Sep 5 00:10:49.999864 containerd[1435]: time="2025-09-05T00:10:49.999821092Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 5 00:10:50.000757 containerd[1435]: time="2025-09-05T00:10:50.000572217Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 5 00:10:50.489243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919422431.mount: Deactivated successfully. 
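Each completed pull leaves an image record (name plus repo digest) behind; a sketch listing what has landed in the k8s.io namespace so far, under the same assumed client setup as above:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Everything pulled so far in this boot should appear here,
	// e.g. registry.k8s.io/kube-proxy:v1.33.4 from the record above.
	images, err := client.ListImages(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Println(img.Name(), img.Target().Digest)
	}
}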
Sep 5 00:10:51.422658 containerd[1435]: time="2025-09-05T00:10:51.422598703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:51.423135 containerd[1435]: time="2025-09-05T00:10:51.423101645Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 5 00:10:51.424125 containerd[1435]: time="2025-09-05T00:10:51.424098284Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:51.427233 containerd[1435]: time="2025-09-05T00:10:51.427196975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:51.429599 containerd[1435]: time="2025-09-05T00:10:51.429474974Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.428851821s" Sep 5 00:10:51.429599 containerd[1435]: time="2025-09-05T00:10:51.429509598Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 5 00:10:51.430119 containerd[1435]: time="2025-09-05T00:10:51.430095414Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:10:51.846838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3611555067.mount: Deactivated successfully. 
Sep 5 00:10:51.850221 containerd[1435]: time="2025-09-05T00:10:51.850181138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:51.851241 containerd[1435]: time="2025-09-05T00:10:51.851210887Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 5 00:10:51.852233 containerd[1435]: time="2025-09-05T00:10:51.852191111Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:51.854536 containerd[1435]: time="2025-09-05T00:10:51.854125183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:51.855386 containerd[1435]: time="2025-09-05T00:10:51.855064887Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 424.93716ms" Sep 5 00:10:51.855386 containerd[1435]: time="2025-09-05T00:10:51.855098235Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 00:10:51.855693 containerd[1435]: time="2025-09-05T00:10:51.855669110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 5 00:10:52.306081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1780122298.mount: Deactivated successfully. Sep 5 00:10:54.666405 containerd[1435]: time="2025-09-05T00:10:54.666347972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:54.667671 containerd[1435]: time="2025-09-05T00:10:54.667399046Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297" Sep 5 00:10:54.668879 containerd[1435]: time="2025-09-05T00:10:54.668832832Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:54.673208 containerd[1435]: time="2025-09-05T00:10:54.671755261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:54.673383 containerd[1435]: time="2025-09-05T00:10:54.673354133Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.817575401s" Sep 5 00:10:54.673569 containerd[1435]: time="2025-09-05T00:10:54.673457182Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 5 00:10:57.962434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
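Across this section the kubelet fails at 00:10:37.289 and 00:10:47.712, and restarts are scheduled at 00:10:47.539 and 00:10:57.962, a steady ~10.25 s spacing; that is consistent with (though not proof of) RestartSec=10 in the unit file, which is not shown in this log. A sketch of the arithmetic from the journal timestamps:

package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Journal timestamps copied from this log; the year is irrelevant to the delta.
	const layout = "Jan _2 15:04:05.000000"
	fail, err := time.Parse(layout, "Sep  5 00:10:47.712239")
	if err != nil {
		log.Fatal(err)
	}
	restart, err := time.Parse(layout, "Sep  5 00:10:57.962434")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(restart.Sub(fail)) // 10.250195s
}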
Sep 5 00:10:57.976532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:10:58.071140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:10:58.074534 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:10:58.106375 kubelet[2010]: E0905 00:10:58.106325 2010 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:10:58.108981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:10:58.109124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:10:59.304885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:10:59.315639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:10:59.336454 systemd[1]: Reloading requested from client PID 2025 ('systemctl') (unit session-7.scope)... Sep 5 00:10:59.336468 systemd[1]: Reloading... Sep 5 00:10:59.410353 zram_generator::config[2070]: No configuration found. Sep 5 00:10:59.594295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:10:59.648075 systemd[1]: Reloading finished in 311 ms. Sep 5 00:10:59.698164 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 00:10:59.698238 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 00:10:59.698477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:10:59.700064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:10:59.804349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:10:59.808306 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:10:59.839540 kubelet[2109]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:10:59.839540 kubelet[2109]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:10:59.839540 kubelet[2109]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
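From here on the new kubelet (PID 2109) starts with client rotation on and immediately begins failing to reach the API server: every reflector, lease, and event call in the records below ends in dial tcp 10.0.0.23:6443: connect: connection refused, simply because no kube-apiserver is running yet. A sketch of the same probe, with the address copied from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The API server endpoint the kubelet keeps dialing in the records below.
	conn, err := net.DialTimeout("tcp", "10.0.0.23:6443", 2*time.Second)
	if err != nil {
		fmt.Println("probe failed:", err) // expect "connection refused" at this point in boot
		return
	}
	conn.Close()
	fmt.Println("API server reachable")
}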
Sep 5 00:10:59.839887 kubelet[2109]: I0905 00:10:59.839586 2109 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:11:01.469395 kubelet[2109]: I0905 00:11:01.469358 2109 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:11:01.470311 kubelet[2109]: I0905 00:11:01.469758 2109 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:11:01.470311 kubelet[2109]: I0905 00:11:01.470003 2109 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:11:01.484843 kubelet[2109]: E0905 00:11:01.484806 2109 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 00:11:01.485761 kubelet[2109]: I0905 00:11:01.485729 2109 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:11:01.491057 kubelet[2109]: E0905 00:11:01.491014 2109 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:11:01.491057 kubelet[2109]: I0905 00:11:01.491048 2109 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:11:01.493503 kubelet[2109]: I0905 00:11:01.493482 2109 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:11:01.494503 kubelet[2109]: I0905 00:11:01.494447 2109 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:11:01.494630 kubelet[2109]: I0905 00:11:01.494491 2109 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:11:01.494713 kubelet[2109]: I0905 00:11:01.494698 2109 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:11:01.494713 kubelet[2109]: I0905 00:11:01.494707 2109 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:11:01.494929 kubelet[2109]: I0905 00:11:01.494901 2109 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:11:01.497360 kubelet[2109]: I0905 00:11:01.497341 2109 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:11:01.497405 kubelet[2109]: I0905 00:11:01.497365 2109 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:11:01.497405 kubelet[2109]: I0905 00:11:01.497386 2109 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:11:01.498505 kubelet[2109]: I0905 00:11:01.498405 2109 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:11:01.502825 kubelet[2109]: I0905 00:11:01.500688 2109 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:11:01.502825 kubelet[2109]: I0905 00:11:01.501531 2109 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 00:11:01.502825 kubelet[2109]: E0905 00:11:01.501628 2109 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:11:01.502825 
kubelet[2109]: W0905 00:11:01.501658 2109 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:11:01.503264 kubelet[2109]: E0905 00:11:01.503213 2109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:11:01.505059 kubelet[2109]: I0905 00:11:01.505033 2109 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:11:01.505210 kubelet[2109]: I0905 00:11:01.505192 2109 server.go:1289] "Started kubelet" Sep 5 00:11:01.505658 kubelet[2109]: I0905 00:11:01.505623 2109 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:11:01.505985 kubelet[2109]: I0905 00:11:01.505946 2109 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:11:01.506258 kubelet[2109]: I0905 00:11:01.506237 2109 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:11:01.508364 kubelet[2109]: I0905 00:11:01.508149 2109 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:11:01.508820 kubelet[2109]: I0905 00:11:01.508776 2109 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:11:01.510197 kubelet[2109]: I0905 00:11:01.510177 2109 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:11:01.510304 kubelet[2109]: I0905 00:11:01.510270 2109 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:11:01.510424 kubelet[2109]: E0905 00:11:01.510409 2109 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:01.511146 kubelet[2109]: I0905 00:11:01.511122 2109 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:11:01.511214 kubelet[2109]: I0905 00:11:01.511203 2109 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:11:01.513715 kubelet[2109]: E0905 00:11:01.513676 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="200ms" Sep 5 00:11:01.514171 kubelet[2109]: I0905 00:11:01.514070 2109 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:11:01.517244 kubelet[2109]: I0905 00:11:01.515555 2109 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:11:01.517244 kubelet[2109]: E0905 00:11:01.516136 2109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:11:01.517981 kubelet[2109]: E0905 00:11:01.516995 2109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623a70f857939f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:11:01.505151903 +0000 UTC m=+1.693344834,LastTimestamp:2025-09-05 00:11:01.505151903 +0000 UTC m=+1.693344834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:11:01.518214 kubelet[2109]: E0905 00:11:01.518193 2109 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:11:01.518429 kubelet[2109]: I0905 00:11:01.518413 2109 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:11:01.523676 kubelet[2109]: E0905 00:11:01.523566 2109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623a70f857939f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:11:01.505151903 +0000 UTC m=+1.693344834,LastTimestamp:2025-09-05 00:11:01.505151903 +0000 UTC m=+1.693344834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:11:01.531306 kubelet[2109]: I0905 00:11:01.531271 2109 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:11:01.531306 kubelet[2109]: I0905 00:11:01.531308 2109 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:11:01.531443 kubelet[2109]: I0905 00:11:01.531327 2109 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:11:01.532755 kubelet[2109]: I0905 00:11:01.532726 2109 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 00:11:01.534463 kubelet[2109]: I0905 00:11:01.534148 2109 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 00:11:01.534463 kubelet[2109]: I0905 00:11:01.534177 2109 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:11:01.534463 kubelet[2109]: I0905 00:11:01.534212 2109 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 00:11:01.534463 kubelet[2109]: I0905 00:11:01.534221 2109 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:11:01.534463 kubelet[2109]: E0905 00:11:01.534270 2109 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:11:01.535208 kubelet[2109]: E0905 00:11:01.535175 2109 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:11:01.611201 kubelet[2109]: E0905 00:11:01.611144 2109 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:01.635370 kubelet[2109]: E0905 00:11:01.635342 2109 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:11:01.659006 kubelet[2109]: I0905 00:11:01.658983 2109 policy_none.go:49] "None policy: Start" Sep 5 00:11:01.659087 kubelet[2109]: I0905 00:11:01.659023 2109 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:11:01.659087 kubelet[2109]: I0905 00:11:01.659037 2109 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:11:01.664480 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:11:01.678065 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:11:01.680720 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:11:01.692180 kubelet[2109]: E0905 00:11:01.692130 2109 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:11:01.692595 kubelet[2109]: I0905 00:11:01.692368 2109 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:11:01.692595 kubelet[2109]: I0905 00:11:01.692386 2109 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:11:01.692719 kubelet[2109]: I0905 00:11:01.692679 2109 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:11:01.693440 kubelet[2109]: E0905 00:11:01.693410 2109 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 00:11:01.693491 kubelet[2109]: E0905 00:11:01.693451 2109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:11:01.715231 kubelet[2109]: E0905 00:11:01.715193 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="400ms" Sep 5 00:11:01.794317 kubelet[2109]: I0905 00:11:01.794208 2109 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:11:01.796102 kubelet[2109]: E0905 00:11:01.796038 2109 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Sep 5 00:11:01.845531 systemd[1]: Created slice kubepods-burstable-pod598b7b1e33ea8127c25a06aa14a40096.slice - libcontainer container kubepods-burstable-pod598b7b1e33ea8127c25a06aa14a40096.slice. Sep 5 00:11:01.862099 kubelet[2109]: E0905 00:11:01.862073 2109 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:11:01.864963 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 5 00:11:01.866349 kubelet[2109]: E0905 00:11:01.866328 2109 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:11:01.877664 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 5 00:11:01.879244 kubelet[2109]: E0905 00:11:01.879075 2109 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:11:01.914541 kubelet[2109]: I0905 00:11:01.914419 2109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/598b7b1e33ea8127c25a06aa14a40096-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"598b7b1e33ea8127c25a06aa14a40096\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:11:01.914541 kubelet[2109]: I0905 00:11:01.914460 2109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/598b7b1e33ea8127c25a06aa14a40096-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"598b7b1e33ea8127c25a06aa14a40096\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:11:01.914541 kubelet[2109]: I0905 00:11:01.914480 2109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/598b7b1e33ea8127c25a06aa14a40096-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"598b7b1e33ea8127c25a06aa14a40096\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:11:01.914541 kubelet[2109]: I0905 00:11:01.914497 2109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:11:01.914725 kubelet[2109]: I0905 00:11:01.914560 2109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:11:01.914725 kubelet[2109]: I0905 00:11:01.914601 2109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:11:01.914725 kubelet[2109]: I0905 00:11:01.914631 2109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:11:01.914725 kubelet[2109]: I0905 00:11:01.914648 2109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:11:01.914725 kubelet[2109]: I0905 00:11:01.914663 2109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:11:01.997816 kubelet[2109]: I0905 00:11:01.997785 2109 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:11:01.998184 kubelet[2109]: E0905 00:11:01.998152 2109 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Sep 5 00:11:02.116595 kubelet[2109]: E0905 00:11:02.116461 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="800ms" Sep 5 00:11:02.162734 kubelet[2109]: E0905 00:11:02.162648 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:02.163294 containerd[1435]: time="2025-09-05T00:11:02.163243394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:598b7b1e33ea8127c25a06aa14a40096,Namespace:kube-system,Attempt:0,}" Sep 5 00:11:02.167767 kubelet[2109]: E0905 00:11:02.167506 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:02.167933 containerd[1435]: time="2025-09-05T00:11:02.167900310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 5 00:11:02.179518 kubelet[2109]: E0905 00:11:02.179483 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:02.180072 containerd[1435]: time="2025-09-05T00:11:02.180028042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 5 00:11:02.399476 kubelet[2109]: I0905 00:11:02.399359 2109 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:11:02.399896 kubelet[2109]: E0905 00:11:02.399792 2109 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Sep 5 00:11:02.433727 kubelet[2109]: E0905 00:11:02.433682 2109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:11:02.680682 kubelet[2109]: E0905 00:11:02.680541 2109 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:11:02.773841 kubelet[2109]: E0905 00:11:02.773797 2109 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:11:02.802444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930351361.mount: Deactivated successfully. Sep 5 00:11:02.810845 containerd[1435]: time="2025-09-05T00:11:02.810773789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:11:02.811703 containerd[1435]: time="2025-09-05T00:11:02.811675975Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:11:02.812272 containerd[1435]: time="2025-09-05T00:11:02.812241984Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:11:02.812953 containerd[1435]: time="2025-09-05T00:11:02.812924568Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:11:02.814780 containerd[1435]: time="2025-09-05T00:11:02.814742846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:11:02.815949 containerd[1435]: time="2025-09-05T00:11:02.815919384Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:11:02.816879 containerd[1435]: time="2025-09-05T00:11:02.816854540Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 5 00:11:02.818064 containerd[1435]: time="2025-09-05T00:11:02.818035594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:11:02.819603 containerd[1435]: time="2025-09-05T00:11:02.819576842Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 639.461119ms" Sep 5 00:11:02.820347 containerd[1435]: time="2025-09-05T00:11:02.820318013Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 652.347527ms" Sep 5 00:11:02.824160 containerd[1435]: time="2025-09-05T00:11:02.824126975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 660.78455ms" Sep 5 00:11:02.840720 kubelet[2109]: E0905 00:11:02.840680 2109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:11:02.917962 kubelet[2109]: E0905 00:11:02.917704 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="1.6s" Sep 5 00:11:02.925275 containerd[1435]: time="2025-09-05T00:11:02.925114527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:02.925275 containerd[1435]: time="2025-09-05T00:11:02.925168079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:02.925275 containerd[1435]: time="2025-09-05T00:11:02.925177910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:02.925465 containerd[1435]: time="2025-09-05T00:11:02.925264751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:02.926152 containerd[1435]: time="2025-09-05T00:11:02.925965359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:02.926251 containerd[1435]: time="2025-09-05T00:11:02.926140721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:02.926251 containerd[1435]: time="2025-09-05T00:11:02.926167057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:02.926422 containerd[1435]: time="2025-09-05T00:11:02.926269404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:02.926466 containerd[1435]: time="2025-09-05T00:11:02.925736326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:02.926466 containerd[1435]: time="2025-09-05T00:11:02.925791716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:02.926466 containerd[1435]: time="2025-09-05T00:11:02.925803345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:02.926466 containerd[1435]: time="2025-09-05T00:11:02.925890027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:02.945744 systemd[1]: Started cri-containerd-2f6816f3b2ce94c7a19034ec14db72ec59adc436fc6be54a9e16fcecf6d8ef92.scope - libcontainer container 2f6816f3b2ce94c7a19034ec14db72ec59adc436fc6be54a9e16fcecf6d8ef92. Sep 5 00:11:02.951157 systemd[1]: Started cri-containerd-923fc83995d337b67bbbed87b370b89025f59986fb25a7ffd01123c1fcd61c4d.scope - libcontainer container 923fc83995d337b67bbbed87b370b89025f59986fb25a7ffd01123c1fcd61c4d. Sep 5 00:11:02.952494 systemd[1]: Started cri-containerd-9254771c2058febe53a2e06228faa01efd41793a573f5eac3c1d5cbffda65d40.scope - libcontainer container 9254771c2058febe53a2e06228faa01efd41793a573f5eac3c1d5cbffda65d40. Sep 5 00:11:02.983301 containerd[1435]: time="2025-09-05T00:11:02.983251563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:598b7b1e33ea8127c25a06aa14a40096,Namespace:kube-system,Attempt:0,} returns sandbox id \"923fc83995d337b67bbbed87b370b89025f59986fb25a7ffd01123c1fcd61c4d\"" Sep 5 00:11:02.984488 kubelet[2109]: E0905 00:11:02.984440 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:02.986064 containerd[1435]: time="2025-09-05T00:11:02.986002200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f6816f3b2ce94c7a19034ec14db72ec59adc436fc6be54a9e16fcecf6d8ef92\"" Sep 5 00:11:02.986632 kubelet[2109]: E0905 00:11:02.986601 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:02.992899 containerd[1435]: time="2025-09-05T00:11:02.992871798Z" level=info msg="CreateContainer within sandbox \"923fc83995d337b67bbbed87b370b89025f59986fb25a7ffd01123c1fcd61c4d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:11:02.996331 containerd[1435]: time="2025-09-05T00:11:02.996276525Z" level=info msg="CreateContainer within sandbox \"2f6816f3b2ce94c7a19034ec14db72ec59adc436fc6be54a9e16fcecf6d8ef92\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:11:02.996536 containerd[1435]: time="2025-09-05T00:11:02.996310454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"9254771c2058febe53a2e06228faa01efd41793a573f5eac3c1d5cbffda65d40\"" Sep 5 00:11:02.997014 kubelet[2109]: E0905 00:11:02.996994 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:03.003080 containerd[1435]: time="2025-09-05T00:11:03.003046611Z" level=info msg="CreateContainer within sandbox \"9254771c2058febe53a2e06228faa01efd41793a573f5eac3c1d5cbffda65d40\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:11:03.016517 containerd[1435]: time="2025-09-05T00:11:03.016464014Z" level=info msg="CreateContainer within sandbox \"923fc83995d337b67bbbed87b370b89025f59986fb25a7ffd01123c1fcd61c4d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"22530f603380a73a8267d3ce62cb9eabb44f40182125e1dcde9e491ffa0ce790\"" Sep 5 00:11:03.017180 
containerd[1435]: time="2025-09-05T00:11:03.017141918Z" level=info msg="StartContainer for \"22530f603380a73a8267d3ce62cb9eabb44f40182125e1dcde9e491ffa0ce790\"" Sep 5 00:11:03.023968 containerd[1435]: time="2025-09-05T00:11:03.023929477Z" level=info msg="CreateContainer within sandbox \"2f6816f3b2ce94c7a19034ec14db72ec59adc436fc6be54a9e16fcecf6d8ef92\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0d6805cfd7f9bcc88202ffe66ccb888cda43bb3ba36f414fae04ad4f75861c54\"" Sep 5 00:11:03.024787 containerd[1435]: time="2025-09-05T00:11:03.024760821Z" level=info msg="StartContainer for \"0d6805cfd7f9bcc88202ffe66ccb888cda43bb3ba36f414fae04ad4f75861c54\"" Sep 5 00:11:03.028033 containerd[1435]: time="2025-09-05T00:11:03.026799411Z" level=info msg="CreateContainer within sandbox \"9254771c2058febe53a2e06228faa01efd41793a573f5eac3c1d5cbffda65d40\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf89192e619f2490f5f151d7f67d9edc3e7e04f8aa37aa55e66f2af41ea6533e\"" Sep 5 00:11:03.028033 containerd[1435]: time="2025-09-05T00:11:03.027333829Z" level=info msg="StartContainer for \"bf89192e619f2490f5f151d7f67d9edc3e7e04f8aa37aa55e66f2af41ea6533e\"" Sep 5 00:11:03.045493 systemd[1]: Started cri-containerd-22530f603380a73a8267d3ce62cb9eabb44f40182125e1dcde9e491ffa0ce790.scope - libcontainer container 22530f603380a73a8267d3ce62cb9eabb44f40182125e1dcde9e491ffa0ce790. Sep 5 00:11:03.063439 systemd[1]: Started cri-containerd-0d6805cfd7f9bcc88202ffe66ccb888cda43bb3ba36f414fae04ad4f75861c54.scope - libcontainer container 0d6805cfd7f9bcc88202ffe66ccb888cda43bb3ba36f414fae04ad4f75861c54. Sep 5 00:11:03.064667 systemd[1]: Started cri-containerd-bf89192e619f2490f5f151d7f67d9edc3e7e04f8aa37aa55e66f2af41ea6533e.scope - libcontainer container bf89192e619f2490f5f151d7f67d9edc3e7e04f8aa37aa55e66f2af41ea6533e. 
Sep 5 00:11:03.097046 containerd[1435]: time="2025-09-05T00:11:03.096924824Z" level=info msg="StartContainer for \"22530f603380a73a8267d3ce62cb9eabb44f40182125e1dcde9e491ffa0ce790\" returns successfully" Sep 5 00:11:03.106538 containerd[1435]: time="2025-09-05T00:11:03.106482555Z" level=info msg="StartContainer for \"bf89192e619f2490f5f151d7f67d9edc3e7e04f8aa37aa55e66f2af41ea6533e\" returns successfully" Sep 5 00:11:03.109327 containerd[1435]: time="2025-09-05T00:11:03.109198570Z" level=info msg="StartContainer for \"0d6805cfd7f9bcc88202ffe66ccb888cda43bb3ba36f414fae04ad4f75861c54\" returns successfully" Sep 5 00:11:03.201477 kubelet[2109]: I0905 00:11:03.201365 2109 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:11:03.542808 kubelet[2109]: E0905 00:11:03.542703 2109 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:11:03.542904 kubelet[2109]: E0905 00:11:03.542831 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:03.544549 kubelet[2109]: E0905 00:11:03.544392 2109 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:11:03.544549 kubelet[2109]: E0905 00:11:03.544506 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:03.548581 kubelet[2109]: E0905 00:11:03.548557 2109 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:11:03.548678 kubelet[2109]: E0905 00:11:03.548660 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:04.547760 kubelet[2109]: E0905 00:11:04.547731 2109 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:11:04.548168 kubelet[2109]: E0905 00:11:04.547787 2109 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:11:04.548168 kubelet[2109]: E0905 00:11:04.547860 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:04.548168 kubelet[2109]: E0905 00:11:04.547881 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:04.548168 kubelet[2109]: E0905 00:11:04.547911 2109 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:11:04.548168 kubelet[2109]: E0905 00:11:04.548020 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:04.775186 kubelet[2109]: E0905 00:11:04.775147 2109 nodelease.go:49] "Failed to get node 
when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 00:11:04.877704 kubelet[2109]: I0905 00:11:04.877609 2109 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:11:04.877704 kubelet[2109]: E0905 00:11:04.877651 2109 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 5 00:11:04.887365 kubelet[2109]: E0905 00:11:04.887338 2109 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:04.987935 kubelet[2109]: E0905 00:11:04.987894 2109 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:05.088682 kubelet[2109]: E0905 00:11:05.088637 2109 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:05.189057 kubelet[2109]: E0905 00:11:05.188951 2109 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:05.289536 kubelet[2109]: E0905 00:11:05.289497 2109 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:05.390448 kubelet[2109]: E0905 00:11:05.390412 2109 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:05.491565 kubelet[2109]: E0905 00:11:05.491452 2109 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:05.591786 kubelet[2109]: E0905 00:11:05.591750 2109 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:05.692434 kubelet[2109]: E0905 00:11:05.692372 2109 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:05.792790 kubelet[2109]: E0905 00:11:05.792679 2109 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:05.893449 kubelet[2109]: E0905 00:11:05.893401 2109 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:06.011822 kubelet[2109]: I0905 00:11:06.011451 2109 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:11:06.021186 kubelet[2109]: I0905 00:11:06.021127 2109 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:11:06.025155 kubelet[2109]: I0905 00:11:06.025111 2109 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:11:06.503810 kubelet[2109]: I0905 00:11:06.503774 2109 apiserver.go:52] "Watching apiserver" Sep 5 00:11:06.508818 kubelet[2109]: E0905 00:11:06.508781 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:06.509898 kubelet[2109]: E0905 00:11:06.509868 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:06.510011 kubelet[2109]: E0905 00:11:06.509988 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:06.512090 kubelet[2109]: I0905 00:11:06.512065 2109 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:11:06.851958 systemd[1]: Reloading requested from client PID 2398 ('systemctl') (unit session-7.scope)... Sep 5 00:11:06.851978 systemd[1]: Reloading... Sep 5 00:11:06.921341 zram_generator::config[2437]: No configuration found. Sep 5 00:11:07.007023 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:11:07.076819 systemd[1]: Reloading finished in 224 ms. Sep 5 00:11:07.109344 kubelet[2109]: I0905 00:11:07.108212 2109 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:11:07.108418 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:11:07.122151 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:11:07.122378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:11:07.122434 systemd[1]: kubelet.service: Consumed 2.037s CPU time, 129.3M memory peak, 0B memory swap peak. Sep 5 00:11:07.130589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:11:07.230395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:11:07.234151 (kubelet)[2479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:11:07.267224 kubelet[2479]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:11:07.267224 kubelet[2479]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:11:07.267224 kubelet[2479]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 00:11:07.267559 kubelet[2479]: I0905 00:11:07.267271 2479 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:11:07.273029 kubelet[2479]: I0905 00:11:07.272990 2479 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:11:07.273029 kubelet[2479]: I0905 00:11:07.273021 2479 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:11:07.273213 kubelet[2479]: I0905 00:11:07.273197 2479 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:11:07.274416 kubelet[2479]: I0905 00:11:07.274399 2479 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 5 00:11:07.277070 kubelet[2479]: I0905 00:11:07.276664 2479 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:11:07.280811 kubelet[2479]: E0905 00:11:07.280740 2479 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:11:07.280811 kubelet[2479]: I0905 00:11:07.280813 2479 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:11:07.283067 kubelet[2479]: I0905 00:11:07.283050 2479 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 00:11:07.283250 kubelet[2479]: I0905 00:11:07.283227 2479 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:11:07.283400 kubelet[2479]: I0905 00:11:07.283251 2479 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:11:07.283472 kubelet[2479]: I0905 00:11:07.283412 2479 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 5 00:11:07.283472 kubelet[2479]: I0905 00:11:07.283421 2479 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:11:07.283472 kubelet[2479]: I0905 00:11:07.283460 2479 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:11:07.283612 kubelet[2479]: I0905 00:11:07.283599 2479 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:11:07.283642 kubelet[2479]: I0905 00:11:07.283615 2479 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:11:07.283642 kubelet[2479]: I0905 00:11:07.283636 2479 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:11:07.283741 kubelet[2479]: I0905 00:11:07.283648 2479 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:11:07.284385 kubelet[2479]: I0905 00:11:07.284353 2479 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:11:07.285064 kubelet[2479]: I0905 00:11:07.285041 2479 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 00:11:07.286926 kubelet[2479]: I0905 00:11:07.286906 2479 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:11:07.286985 kubelet[2479]: I0905 00:11:07.286944 2479 server.go:1289] "Started kubelet" Sep 5 00:11:07.288181 kubelet[2479]: I0905 00:11:07.287849 2479 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:11:07.288181 kubelet[2479]: I0905 00:11:07.288073 2479 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:11:07.288181 kubelet[2479]: I0905 00:11:07.288114 2479 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:11:07.288655 kubelet[2479]: I0905 00:11:07.288620 2479 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:11:07.289199 kubelet[2479]: I0905 00:11:07.289179 2479 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:11:07.290191 kubelet[2479]: I0905 00:11:07.290164 2479 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:11:07.293812 kubelet[2479]: E0905 00:11:07.293660 2479 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:11:07.293812 kubelet[2479]: I0905 00:11:07.293696 2479 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:11:07.300304 kubelet[2479]: I0905 00:11:07.295220 2479 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:11:07.300304 kubelet[2479]: I0905 00:11:07.295429 2479 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:11:07.308562 kubelet[2479]: I0905 00:11:07.308420 2479 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:11:07.308562 kubelet[2479]: I0905 00:11:07.308505 2479 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:11:07.310424 kubelet[2479]: I0905 00:11:07.310382 2479 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 00:11:07.311243 kubelet[2479]: I0905 00:11:07.311215 2479 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 5 00:11:07.311243 kubelet[2479]: I0905 00:11:07.311239 2479 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:11:07.311243 kubelet[2479]: I0905 00:11:07.311256 2479 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 00:11:07.311243 kubelet[2479]: I0905 00:11:07.311265 2479 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:11:07.311539 kubelet[2479]: E0905 00:11:07.311407 2479 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:11:07.314020 kubelet[2479]: I0905 00:11:07.313995 2479 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:11:07.320523 kubelet[2479]: E0905 00:11:07.320499 2479 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:11:07.347636 kubelet[2479]: I0905 00:11:07.347604 2479 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:11:07.347636 kubelet[2479]: I0905 00:11:07.347624 2479 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:11:07.347636 kubelet[2479]: I0905 00:11:07.347643 2479 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:11:07.347809 kubelet[2479]: I0905 00:11:07.347775 2479 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:11:07.347809 kubelet[2479]: I0905 00:11:07.347786 2479 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:11:07.347809 kubelet[2479]: I0905 00:11:07.347801 2479 policy_none.go:49] "None policy: Start" Sep 5 00:11:07.347809 kubelet[2479]: I0905 00:11:07.347810 2479 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:11:07.347890 kubelet[2479]: I0905 00:11:07.347818 2479 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:11:07.347930 kubelet[2479]: I0905 00:11:07.347896 2479 state_mem.go:75] "Updated machine memory state" Sep 5 00:11:07.351086 kubelet[2479]: E0905 00:11:07.351069 2479 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:11:07.351231 kubelet[2479]: I0905 00:11:07.351217 2479 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:11:07.351277 kubelet[2479]: I0905 00:11:07.351235 2479 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:11:07.351470 kubelet[2479]: I0905 00:11:07.351425 2479 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:11:07.352406 kubelet[2479]: E0905 00:11:07.352376 2479 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 00:11:07.412425 kubelet[2479]: I0905 00:11:07.412170 2479 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:11:07.412537 kubelet[2479]: I0905 00:11:07.412516 2479 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:11:07.412565 kubelet[2479]: I0905 00:11:07.412553 2479 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:11:07.418807 kubelet[2479]: E0905 00:11:07.418760 2479 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 00:11:07.419170 kubelet[2479]: E0905 00:11:07.419147 2479 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 00:11:07.419323 kubelet[2479]: E0905 00:11:07.419294 2479 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:11:07.455670 kubelet[2479]: I0905 00:11:07.455539 2479 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:11:07.460631 kubelet[2479]: I0905 00:11:07.460603 2479 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 5 00:11:07.460703 kubelet[2479]: I0905 00:11:07.460681 2479 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:11:07.496903 kubelet[2479]: I0905 00:11:07.496864 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/598b7b1e33ea8127c25a06aa14a40096-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"598b7b1e33ea8127c25a06aa14a40096\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:11:07.496903 kubelet[2479]: I0905 00:11:07.496904 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:11:07.497049 kubelet[2479]: I0905 00:11:07.496925 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:11:07.497049 kubelet[2479]: I0905 00:11:07.496941 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:11:07.497049 kubelet[2479]: I0905 00:11:07.497008 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/598b7b1e33ea8127c25a06aa14a40096-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"598b7b1e33ea8127c25a06aa14a40096\") " 
pod="kube-system/kube-apiserver-localhost" Sep 5 00:11:07.497049 kubelet[2479]: I0905 00:11:07.497039 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/598b7b1e33ea8127c25a06aa14a40096-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"598b7b1e33ea8127c25a06aa14a40096\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:11:07.497130 kubelet[2479]: I0905 00:11:07.497060 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:11:07.497130 kubelet[2479]: I0905 00:11:07.497093 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:11:07.497130 kubelet[2479]: I0905 00:11:07.497114 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:11:07.720174 kubelet[2479]: E0905 00:11:07.719819 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:07.720174 kubelet[2479]: E0905 00:11:07.719904 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:07.720174 kubelet[2479]: E0905 00:11:07.720028 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:08.284537 kubelet[2479]: I0905 00:11:08.284198 2479 apiserver.go:52] "Watching apiserver" Sep 5 00:11:08.297481 kubelet[2479]: I0905 00:11:08.296168 2479 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:11:08.329915 kubelet[2479]: E0905 00:11:08.329211 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:08.329915 kubelet[2479]: I0905 00:11:08.329274 2479 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:11:08.329915 kubelet[2479]: I0905 00:11:08.329731 2479 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:11:08.335965 kubelet[2479]: E0905 00:11:08.335926 2479 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 00:11:08.336115 kubelet[2479]: E0905 00:11:08.336095 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:08.336465 kubelet[2479]: E0905 00:11:08.336159 2479 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 00:11:08.336891 kubelet[2479]: E0905 00:11:08.336874 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:08.381638 kubelet[2479]: I0905 00:11:08.381557 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.381541286 podStartE2EDuration="2.381541286s" podCreationTimestamp="2025-09-05 00:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:11:08.372891119 +0000 UTC m=+1.135462540" watchObservedRunningTime="2025-09-05 00:11:08.381541286 +0000 UTC m=+1.144112707" Sep 5 00:11:08.390360 kubelet[2479]: I0905 00:11:08.390164 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.390150009 podStartE2EDuration="2.390150009s" podCreationTimestamp="2025-09-05 00:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:11:08.381882203 +0000 UTC m=+1.144453584" watchObservedRunningTime="2025-09-05 00:11:08.390150009 +0000 UTC m=+1.152721390" Sep 5 00:11:08.390733 kubelet[2479]: I0905 00:11:08.390675 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.390665704 podStartE2EDuration="2.390665704s" podCreationTimestamp="2025-09-05 00:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:11:08.390658183 +0000 UTC m=+1.153229604" watchObservedRunningTime="2025-09-05 00:11:08.390665704 +0000 UTC m=+1.153237125" Sep 5 00:11:09.331314 kubelet[2479]: E0905 00:11:09.331061 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:09.331314 kubelet[2479]: E0905 00:11:09.331121 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:11.245713 kubelet[2479]: I0905 00:11:11.245674 2479 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:11:11.246062 containerd[1435]: time="2025-09-05T00:11:11.246006959Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 00:11:11.246254 kubelet[2479]: I0905 00:11:11.246187 2479 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:11:11.823868 systemd[1]: Created slice kubepods-besteffort-pod1ac22240_7522_48c0_988d_e65e47285410.slice - libcontainer container kubepods-besteffort-pod1ac22240_7522_48c0_988d_e65e47285410.slice. 
Sep 5 00:11:11.825927 kubelet[2479]: I0905 00:11:11.825851 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ac22240-7522-48c0-988d-e65e47285410-lib-modules\") pod \"kube-proxy-86j6g\" (UID: \"1ac22240-7522-48c0-988d-e65e47285410\") " pod="kube-system/kube-proxy-86j6g" Sep 5 00:11:11.825927 kubelet[2479]: I0905 00:11:11.825897 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ac22240-7522-48c0-988d-e65e47285410-kube-proxy\") pod \"kube-proxy-86j6g\" (UID: \"1ac22240-7522-48c0-988d-e65e47285410\") " pod="kube-system/kube-proxy-86j6g" Sep 5 00:11:11.825927 kubelet[2479]: I0905 00:11:11.825917 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs7ls\" (UniqueName: \"kubernetes.io/projected/1ac22240-7522-48c0-988d-e65e47285410-kube-api-access-cs7ls\") pod \"kube-proxy-86j6g\" (UID: \"1ac22240-7522-48c0-988d-e65e47285410\") " pod="kube-system/kube-proxy-86j6g" Sep 5 00:11:11.826057 kubelet[2479]: I0905 00:11:11.825941 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ac22240-7522-48c0-988d-e65e47285410-xtables-lock\") pod \"kube-proxy-86j6g\" (UID: \"1ac22240-7522-48c0-988d-e65e47285410\") " pod="kube-system/kube-proxy-86j6g" Sep 5 00:11:11.934787 kubelet[2479]: E0905 00:11:11.934754 2479 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 5 00:11:11.934787 kubelet[2479]: E0905 00:11:11.934785 2479 projected.go:194] Error preparing data for projected volume kube-api-access-cs7ls for pod kube-system/kube-proxy-86j6g: configmap "kube-root-ca.crt" not found Sep 5 00:11:11.934953 kubelet[2479]: E0905 00:11:11.934852 2479 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ac22240-7522-48c0-988d-e65e47285410-kube-api-access-cs7ls podName:1ac22240-7522-48c0-988d-e65e47285410 nodeName:}" failed. No retries permitted until 2025-09-05 00:11:12.434829897 +0000 UTC m=+5.197401318 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cs7ls" (UniqueName: "kubernetes.io/projected/1ac22240-7522-48c0-988d-e65e47285410-kube-api-access-cs7ls") pod "kube-proxy-86j6g" (UID: "1ac22240-7522-48c0-988d-e65e47285410") : configmap "kube-root-ca.crt" not found Sep 5 00:11:12.447402 systemd[1]: Created slice kubepods-besteffort-poddb39f836_cca9_4113_97e4_8a4481c4e5c2.slice - libcontainer container kubepods-besteffort-poddb39f836_cca9_4113_97e4_8a4481c4e5c2.slice. 
Sep 5 00:11:12.530528 kubelet[2479]: I0905 00:11:12.530473 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ltzh\" (UniqueName: \"kubernetes.io/projected/db39f836-cca9-4113-97e4-8a4481c4e5c2-kube-api-access-8ltzh\") pod \"tigera-operator-755d956888-89hkk\" (UID: \"db39f836-cca9-4113-97e4-8a4481c4e5c2\") " pod="tigera-operator/tigera-operator-755d956888-89hkk" Sep 5 00:11:12.530528 kubelet[2479]: I0905 00:11:12.530522 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/db39f836-cca9-4113-97e4-8a4481c4e5c2-var-lib-calico\") pod \"tigera-operator-755d956888-89hkk\" (UID: \"db39f836-cca9-4113-97e4-8a4481c4e5c2\") " pod="tigera-operator/tigera-operator-755d956888-89hkk" Sep 5 00:11:12.567074 kubelet[2479]: E0905 00:11:12.566920 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:12.734986 kubelet[2479]: E0905 00:11:12.734816 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:12.735519 containerd[1435]: time="2025-09-05T00:11:12.735470740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-86j6g,Uid:1ac22240-7522-48c0-988d-e65e47285410,Namespace:kube-system,Attempt:0,}" Sep 5 00:11:12.750853 containerd[1435]: time="2025-09-05T00:11:12.750792573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-89hkk,Uid:db39f836-cca9-4113-97e4-8a4481c4e5c2,Namespace:tigera-operator,Attempt:0,}" Sep 5 00:11:12.754441 containerd[1435]: time="2025-09-05T00:11:12.754144580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:12.754441 containerd[1435]: time="2025-09-05T00:11:12.754194704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:12.754441 containerd[1435]: time="2025-09-05T00:11:12.754205225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:12.754441 containerd[1435]: time="2025-09-05T00:11:12.754307634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:12.771496 systemd[1]: Started cri-containerd-0204c97643a12d93c222f5b3b9b4c33d8f03c895b36ff527fb9a52bc52545197.scope - libcontainer container 0204c97643a12d93c222f5b3b9b4c33d8f03c895b36ff527fb9a52bc52545197. Sep 5 00:11:12.779954 containerd[1435]: time="2025-09-05T00:11:12.778979908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:12.779954 containerd[1435]: time="2025-09-05T00:11:12.779057155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:12.779954 containerd[1435]: time="2025-09-05T00:11:12.779073156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:12.779954 containerd[1435]: time="2025-09-05T00:11:12.779144362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:12.804548 systemd[1]: Started cri-containerd-f8904c07c50c9fe4e315a4b9dc849b049d16de9ea6edbba29884941347a217d4.scope - libcontainer container f8904c07c50c9fe4e315a4b9dc849b049d16de9ea6edbba29884941347a217d4. Sep 5 00:11:12.805011 containerd[1435]: time="2025-09-05T00:11:12.804978336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-86j6g,Uid:1ac22240-7522-48c0-988d-e65e47285410,Namespace:kube-system,Attempt:0,} returns sandbox id \"0204c97643a12d93c222f5b3b9b4c33d8f03c895b36ff527fb9a52bc52545197\"" Sep 5 00:11:12.805984 kubelet[2479]: E0905 00:11:12.805865 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:12.810851 containerd[1435]: time="2025-09-05T00:11:12.810813916Z" level=info msg="CreateContainer within sandbox \"0204c97643a12d93c222f5b3b9b4c33d8f03c895b36ff527fb9a52bc52545197\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:11:12.822660 containerd[1435]: time="2025-09-05T00:11:12.822607406Z" level=info msg="CreateContainer within sandbox \"0204c97643a12d93c222f5b3b9b4c33d8f03c895b36ff527fb9a52bc52545197\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"28e24a17c9c5290a7a18e3aa9cbad11ffa1d681f901c45b76f5c92fc9baea828\"" Sep 5 00:11:12.824246 containerd[1435]: time="2025-09-05T00:11:12.823512524Z" level=info msg="StartContainer for \"28e24a17c9c5290a7a18e3aa9cbad11ffa1d681f901c45b76f5c92fc9baea828\"" Sep 5 00:11:12.838730 containerd[1435]: time="2025-09-05T00:11:12.838694145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-89hkk,Uid:db39f836-cca9-4113-97e4-8a4481c4e5c2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f8904c07c50c9fe4e315a4b9dc849b049d16de9ea6edbba29884941347a217d4\"" Sep 5 00:11:12.842413 containerd[1435]: time="2025-09-05T00:11:12.842292813Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 00:11:12.851454 systemd[1]: Started cri-containerd-28e24a17c9c5290a7a18e3aa9cbad11ffa1d681f901c45b76f5c92fc9baea828.scope - libcontainer container 28e24a17c9c5290a7a18e3aa9cbad11ffa1d681f901c45b76f5c92fc9baea828. 
Sep 5 00:11:12.874597 containerd[1435]: time="2025-09-05T00:11:12.874550617Z" level=info msg="StartContainer for \"28e24a17c9c5290a7a18e3aa9cbad11ffa1d681f901c45b76f5c92fc9baea828\" returns successfully" Sep 5 00:11:13.337884 kubelet[2479]: E0905 00:11:13.337783 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:13.349392 kubelet[2479]: I0905 00:11:13.349087 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-86j6g" podStartSLOduration=2.349072369 podStartE2EDuration="2.349072369s" podCreationTimestamp="2025-09-05 00:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:11:13.348003722 +0000 UTC m=+6.110575143" watchObservedRunningTime="2025-09-05 00:11:13.349072369 +0000 UTC m=+6.111643790" Sep 5 00:11:14.409755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379554456.mount: Deactivated successfully. Sep 5 00:11:15.798163 kubelet[2479]: E0905 00:11:15.797764 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:16.017165 kubelet[2479]: E0905 00:11:16.015395 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:16.342851 kubelet[2479]: E0905 00:11:16.342212 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:16.344684 kubelet[2479]: E0905 00:11:16.344620 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:18.155882 containerd[1435]: time="2025-09-05T00:11:18.155824749Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:18.156455 containerd[1435]: time="2025-09-05T00:11:18.156419986Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 5 00:11:18.157332 containerd[1435]: time="2025-09-05T00:11:18.157258759Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:18.159681 containerd[1435]: time="2025-09-05T00:11:18.159646867Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:18.160603 containerd[1435]: time="2025-09-05T00:11:18.160561684Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 5.318228627s" Sep 5 00:11:18.160671 containerd[1435]: time="2025-09-05T00:11:18.160604687Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 5 00:11:18.183174 containerd[1435]: time="2025-09-05T00:11:18.183122328Z" level=info msg="CreateContainer within sandbox \"f8904c07c50c9fe4e315a4b9dc849b049d16de9ea6edbba29884941347a217d4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 00:11:18.198188 containerd[1435]: time="2025-09-05T00:11:18.198139782Z" level=info msg="CreateContainer within sandbox \"f8904c07c50c9fe4e315a4b9dc849b049d16de9ea6edbba29884941347a217d4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"af1e4479b1cb93024097ee7caaa5d6e4a0275d204da41b811549b2ef8bfc7497\"" Sep 5 00:11:18.200194 containerd[1435]: time="2025-09-05T00:11:18.200167069Z" level=info msg="StartContainer for \"af1e4479b1cb93024097ee7caaa5d6e4a0275d204da41b811549b2ef8bfc7497\"" Sep 5 00:11:18.225481 systemd[1]: Started cri-containerd-af1e4479b1cb93024097ee7caaa5d6e4a0275d204da41b811549b2ef8bfc7497.scope - libcontainer container af1e4479b1cb93024097ee7caaa5d6e4a0275d204da41b811549b2ef8bfc7497. Sep 5 00:11:18.248371 containerd[1435]: time="2025-09-05T00:11:18.248332146Z" level=info msg="StartContainer for \"af1e4479b1cb93024097ee7caaa5d6e4a0275d204da41b811549b2ef8bfc7497\" returns successfully" Sep 5 00:11:20.107422 update_engine[1422]: I20250905 00:11:20.107343 1422 update_attempter.cc:509] Updating boot flags... Sep 5 00:11:20.155694 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2845) Sep 5 00:11:20.207000 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2844) Sep 5 00:11:20.283581 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2844) Sep 5 00:11:20.499463 systemd[1]: cri-containerd-af1e4479b1cb93024097ee7caaa5d6e4a0275d204da41b811549b2ef8bfc7497.scope: Deactivated successfully. Sep 5 00:11:20.537007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af1e4479b1cb93024097ee7caaa5d6e4a0275d204da41b811549b2ef8bfc7497-rootfs.mount: Deactivated successfully. 
Sep 5 00:11:20.600327 containerd[1435]: time="2025-09-05T00:11:20.592896262Z" level=info msg="shim disconnected" id=af1e4479b1cb93024097ee7caaa5d6e4a0275d204da41b811549b2ef8bfc7497 namespace=k8s.io Sep 5 00:11:20.600327 containerd[1435]: time="2025-09-05T00:11:20.597764896Z" level=warning msg="cleaning up after shim disconnected" id=af1e4479b1cb93024097ee7caaa5d6e4a0275d204da41b811549b2ef8bfc7497 namespace=k8s.io Sep 5 00:11:20.600327 containerd[1435]: time="2025-09-05T00:11:20.597778336Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:11:21.367027 kubelet[2479]: I0905 00:11:21.366987 2479 scope.go:117] "RemoveContainer" containerID="af1e4479b1cb93024097ee7caaa5d6e4a0275d204da41b811549b2ef8bfc7497" Sep 5 00:11:21.372102 containerd[1435]: time="2025-09-05T00:11:21.372062172Z" level=info msg="CreateContainer within sandbox \"f8904c07c50c9fe4e315a4b9dc849b049d16de9ea6edbba29884941347a217d4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 5 00:11:21.385457 containerd[1435]: time="2025-09-05T00:11:21.385413526Z" level=info msg="CreateContainer within sandbox \"f8904c07c50c9fe4e315a4b9dc849b049d16de9ea6edbba29884941347a217d4\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0ae2554f3bdcb3ccbd94a0617b2ee1be1f846bb2de6f2412d42123edf80f83cc\"" Sep 5 00:11:21.387222 containerd[1435]: time="2025-09-05T00:11:21.387196941Z" level=info msg="StartContainer for \"0ae2554f3bdcb3ccbd94a0617b2ee1be1f846bb2de6f2412d42123edf80f83cc\"" Sep 5 00:11:21.418499 systemd[1]: Started cri-containerd-0ae2554f3bdcb3ccbd94a0617b2ee1be1f846bb2de6f2412d42123edf80f83cc.scope - libcontainer container 0ae2554f3bdcb3ccbd94a0617b2ee1be1f846bb2de6f2412d42123edf80f83cc. Sep 5 00:11:21.449798 containerd[1435]: time="2025-09-05T00:11:21.449744007Z" level=info msg="StartContainer for \"0ae2554f3bdcb3ccbd94a0617b2ee1be1f846bb2de6f2412d42123edf80f83cc\" returns successfully" Sep 5 00:11:22.375553 kubelet[2479]: I0905 00:11:22.375492 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-89hkk" podStartSLOduration=5.051336081 podStartE2EDuration="10.375476889s" podCreationTimestamp="2025-09-05 00:11:12 +0000 UTC" firstStartedPulling="2025-09-05 00:11:12.84179129 +0000 UTC m=+5.604362711" lastFinishedPulling="2025-09-05 00:11:18.165932098 +0000 UTC m=+10.928503519" observedRunningTime="2025-09-05 00:11:18.374830977 +0000 UTC m=+11.137402398" watchObservedRunningTime="2025-09-05 00:11:22.375476889 +0000 UTC m=+15.138048310" Sep 5 00:11:22.574773 kubelet[2479]: E0905 00:11:22.574657 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:23.365468 kubelet[2479]: E0905 00:11:23.365438 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:23.819149 sudo[1612]: pam_unix(sudo:session): session closed for user root Sep 5 00:11:23.821709 sshd[1609]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:23.825568 systemd[1]: sshd@6-10.0.0.23:22-10.0.0.1:45276.service: Deactivated successfully. Sep 5 00:11:23.827517 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:11:23.827820 systemd[1]: session-7.scope: Consumed 6.565s CPU time, 158.1M memory peak, 0B memory swap peak. 
Sep 5 00:11:23.828348 systemd-logind[1417]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:11:23.829309 systemd-logind[1417]: Removed session 7. Sep 5 00:11:30.530797 systemd[1]: Created slice kubepods-besteffort-pod1d5e4026_5d75_4f49_b348_d721479a0a08.slice - libcontainer container kubepods-besteffort-pod1d5e4026_5d75_4f49_b348_d721479a0a08.slice. Sep 5 00:11:30.557779 kubelet[2479]: I0905 00:11:30.557710 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1d5e4026-5d75-4f49-b348-d721479a0a08-typha-certs\") pod \"calico-typha-84c6ffdf69-kqprz\" (UID: \"1d5e4026-5d75-4f49-b348-d721479a0a08\") " pod="calico-system/calico-typha-84c6ffdf69-kqprz" Sep 5 00:11:30.558084 kubelet[2479]: I0905 00:11:30.557808 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d5e4026-5d75-4f49-b348-d721479a0a08-tigera-ca-bundle\") pod \"calico-typha-84c6ffdf69-kqprz\" (UID: \"1d5e4026-5d75-4f49-b348-d721479a0a08\") " pod="calico-system/calico-typha-84c6ffdf69-kqprz" Sep 5 00:11:30.558084 kubelet[2479]: I0905 00:11:30.557828 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdbw2\" (UniqueName: \"kubernetes.io/projected/1d5e4026-5d75-4f49-b348-d721479a0a08-kube-api-access-pdbw2\") pod \"calico-typha-84c6ffdf69-kqprz\" (UID: \"1d5e4026-5d75-4f49-b348-d721479a0a08\") " pod="calico-system/calico-typha-84c6ffdf69-kqprz" Sep 5 00:11:30.834672 kubelet[2479]: E0905 00:11:30.834483 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:30.835635 containerd[1435]: time="2025-09-05T00:11:30.835463424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84c6ffdf69-kqprz,Uid:1d5e4026-5d75-4f49-b348-d721479a0a08,Namespace:calico-system,Attempt:0,}" Sep 5 00:11:30.874545 containerd[1435]: time="2025-09-05T00:11:30.874437967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:30.874545 containerd[1435]: time="2025-09-05T00:11:30.874495289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:30.874545 containerd[1435]: time="2025-09-05T00:11:30.874506129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:30.874712 containerd[1435]: time="2025-09-05T00:11:30.874593452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:30.903888 systemd[1]: Started cri-containerd-b45b547f3ee7c9cac1fd969efa6b4a215007f6d1815d32373d4ecd20baa192dd.scope - libcontainer container b45b547f3ee7c9cac1fd969efa6b4a215007f6d1815d32373d4ecd20baa192dd. Sep 5 00:11:30.919574 systemd[1]: Created slice kubepods-besteffort-podfd082513_777d_4a6a_adb7_f6cde16a44fc.slice - libcontainer container kubepods-besteffort-podfd082513_777d_4a6a_adb7_f6cde16a44fc.slice. 
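The recurring dns.go:153 warning means the node's resolv.conf lists more nameservers than the limit of three the kubelet enforces (the classic glibc MAXNS constraint), so only the first three are applied; the log shows the survivors (1.1.1.1, 1.0.0.1, 8.8.8.8) but not the dropped entries. A sketch of that trimming; the fourth nameserver below is a made-up stand-in for whatever was dropped:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // trimNameservers keeps at most max entries, mirroring the behavior
    // behind the kubelet's "Nameserver limits exceeded" warning.
    func trimNameservers(ns []string, max int) (applied []string, exceeded bool) {
    	if len(ns) <= max {
    		return ns, false
    	}
    	return ns[:max], true
    }

    func main() {
    	// First three match the "applied nameserver line" in the log;
    	// the fourth is a hypothetical dropped entry.
    	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"}
    	applied, exceeded := trimNameservers(host, 3)
    	fmt.Println(exceeded, strings.Join(applied, " ")) // true 1.1.1.1 1.0.0.1 8.8.8.8
    }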
Sep 5 00:11:30.949698 containerd[1435]: time="2025-09-05T00:11:30.949561712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84c6ffdf69-kqprz,Uid:1d5e4026-5d75-4f49-b348-d721479a0a08,Namespace:calico-system,Attempt:0,} returns sandbox id \"b45b547f3ee7c9cac1fd969efa6b4a215007f6d1815d32373d4ecd20baa192dd\"" Sep 5 00:11:30.950525 kubelet[2479]: E0905 00:11:30.950491 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:30.952452 containerd[1435]: time="2025-09-05T00:11:30.952415773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 00:11:30.961038 kubelet[2479]: I0905 00:11:30.961000 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fd082513-777d-4a6a-adb7-f6cde16a44fc-policysync\") pod \"calico-node-kwvbm\" (UID: \"fd082513-777d-4a6a-adb7-f6cde16a44fc\") " pod="calico-system/calico-node-kwvbm" Sep 5 00:11:30.964596 kubelet[2479]: I0905 00:11:30.964337 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fd082513-777d-4a6a-adb7-f6cde16a44fc-cni-log-dir\") pod \"calico-node-kwvbm\" (UID: \"fd082513-777d-4a6a-adb7-f6cde16a44fc\") " pod="calico-system/calico-node-kwvbm" Sep 5 00:11:30.964596 kubelet[2479]: I0905 00:11:30.964410 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fd082513-777d-4a6a-adb7-f6cde16a44fc-cni-net-dir\") pod \"calico-node-kwvbm\" (UID: \"fd082513-777d-4a6a-adb7-f6cde16a44fc\") " pod="calico-system/calico-node-kwvbm" Sep 5 00:11:30.964596 kubelet[2479]: I0905 00:11:30.964436 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fd082513-777d-4a6a-adb7-f6cde16a44fc-flexvol-driver-host\") pod \"calico-node-kwvbm\" (UID: \"fd082513-777d-4a6a-adb7-f6cde16a44fc\") " pod="calico-system/calico-node-kwvbm" Sep 5 00:11:30.964596 kubelet[2479]: I0905 00:11:30.964456 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdrgn\" (UniqueName: \"kubernetes.io/projected/fd082513-777d-4a6a-adb7-f6cde16a44fc-kube-api-access-vdrgn\") pod \"calico-node-kwvbm\" (UID: \"fd082513-777d-4a6a-adb7-f6cde16a44fc\") " pod="calico-system/calico-node-kwvbm" Sep 5 00:11:30.964596 kubelet[2479]: I0905 00:11:30.964491 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fd082513-777d-4a6a-adb7-f6cde16a44fc-cni-bin-dir\") pod \"calico-node-kwvbm\" (UID: \"fd082513-777d-4a6a-adb7-f6cde16a44fc\") " pod="calico-system/calico-node-kwvbm" Sep 5 00:11:30.964997 kubelet[2479]: I0905 00:11:30.964509 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd082513-777d-4a6a-adb7-f6cde16a44fc-tigera-ca-bundle\") pod \"calico-node-kwvbm\" (UID: \"fd082513-777d-4a6a-adb7-f6cde16a44fc\") " pod="calico-system/calico-node-kwvbm" Sep 5 00:11:30.964997 kubelet[2479]: I0905 00:11:30.964580 2479 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fd082513-777d-4a6a-adb7-f6cde16a44fc-var-lib-calico\") pod \"calico-node-kwvbm\" (UID: \"fd082513-777d-4a6a-adb7-f6cde16a44fc\") " pod="calico-system/calico-node-kwvbm" Sep 5 00:11:30.964997 kubelet[2479]: I0905 00:11:30.964635 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fd082513-777d-4a6a-adb7-f6cde16a44fc-node-certs\") pod \"calico-node-kwvbm\" (UID: \"fd082513-777d-4a6a-adb7-f6cde16a44fc\") " pod="calico-system/calico-node-kwvbm" Sep 5 00:11:30.964997 kubelet[2479]: I0905 00:11:30.964652 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fd082513-777d-4a6a-adb7-f6cde16a44fc-var-run-calico\") pod \"calico-node-kwvbm\" (UID: \"fd082513-777d-4a6a-adb7-f6cde16a44fc\") " pod="calico-system/calico-node-kwvbm" Sep 5 00:11:30.964997 kubelet[2479]: I0905 00:11:30.964672 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd082513-777d-4a6a-adb7-f6cde16a44fc-xtables-lock\") pod \"calico-node-kwvbm\" (UID: \"fd082513-777d-4a6a-adb7-f6cde16a44fc\") " pod="calico-system/calico-node-kwvbm" Sep 5 00:11:30.965249 kubelet[2479]: I0905 00:11:30.964690 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd082513-777d-4a6a-adb7-f6cde16a44fc-lib-modules\") pod \"calico-node-kwvbm\" (UID: \"fd082513-777d-4a6a-adb7-f6cde16a44fc\") " pod="calico-system/calico-node-kwvbm" Sep 5 00:11:31.073712 kubelet[2479]: E0905 00:11:31.073572 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.073712 kubelet[2479]: W0905 00:11:31.073607 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.078173 kubelet[2479]: E0905 00:11:31.078100 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.080651 kubelet[2479]: E0905 00:11:31.080619 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.080797 kubelet[2479]: W0905 00:11:31.080779 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.080863 kubelet[2479]: E0905 00:11:31.080851 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.197449 kubelet[2479]: E0905 00:11:31.196975 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x7qbm" podUID="9349fa40-4aa1-44d3-a3a7-8c6748ecbd04" Sep 5 00:11:31.224332 containerd[1435]: time="2025-09-05T00:11:31.224249541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kwvbm,Uid:fd082513-777d-4a6a-adb7-f6cde16a44fc,Namespace:calico-system,Attempt:0,}" Sep 5 00:11:31.247440 containerd[1435]: time="2025-09-05T00:11:31.247310367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:31.247440 containerd[1435]: time="2025-09-05T00:11:31.247381729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:31.247440 containerd[1435]: time="2025-09-05T00:11:31.247392969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:31.247621 containerd[1435]: time="2025-09-05T00:11:31.247506653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:31.252295 kubelet[2479]: E0905 00:11:31.252260 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.252593 kubelet[2479]: W0905 00:11:31.252435 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.252593 kubelet[2479]: E0905 00:11:31.252464 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.252796 kubelet[2479]: E0905 00:11:31.252720 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.263543 kubelet[2479]: W0905 00:11:31.252732 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.263543 kubelet[2479]: E0905 00:11:31.263347 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.264629 kubelet[2479]: E0905 00:11:31.264611 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.264629 kubelet[2479]: W0905 00:11:31.264628 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.264735 kubelet[2479]: E0905 00:11:31.264647 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.264830 kubelet[2479]: E0905 00:11:31.264791 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.264830 kubelet[2479]: W0905 00:11:31.264807 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.264830 kubelet[2479]: E0905 00:11:31.264815 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.266159 kubelet[2479]: E0905 00:11:31.264961 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.266159 kubelet[2479]: W0905 00:11:31.264976 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.266159 kubelet[2479]: E0905 00:11:31.264985 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.266549 kubelet[2479]: E0905 00:11:31.266490 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.266549 kubelet[2479]: W0905 00:11:31.266506 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.266549 kubelet[2479]: E0905 00:11:31.266518 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.267042 kubelet[2479]: E0905 00:11:31.266722 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.267042 kubelet[2479]: W0905 00:11:31.266729 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.267042 kubelet[2479]: E0905 00:11:31.266736 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.267042 kubelet[2479]: E0905 00:11:31.266884 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.267042 kubelet[2479]: W0905 00:11:31.266892 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.267042 kubelet[2479]: E0905 00:11:31.266899 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.267649 kubelet[2479]: E0905 00:11:31.267354 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.267649 kubelet[2479]: W0905 00:11:31.267363 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.267649 kubelet[2479]: E0905 00:11:31.267373 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.268213 kubelet[2479]: E0905 00:11:31.268011 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.268213 kubelet[2479]: W0905 00:11:31.268030 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.268213 kubelet[2479]: E0905 00:11:31.268048 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.268213 kubelet[2479]: E0905 00:11:31.268207 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.268213 kubelet[2479]: W0905 00:11:31.268216 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.268538 kubelet[2479]: E0905 00:11:31.268224 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.268538 kubelet[2479]: E0905 00:11:31.268416 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.268538 kubelet[2479]: W0905 00:11:31.268425 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.268538 kubelet[2479]: E0905 00:11:31.268434 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.268749 kubelet[2479]: E0905 00:11:31.268705 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.268749 kubelet[2479]: W0905 00:11:31.268721 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.268749 kubelet[2479]: E0905 00:11:31.268731 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.268889 kubelet[2479]: E0905 00:11:31.268877 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.268889 kubelet[2479]: W0905 00:11:31.268888 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.268953 kubelet[2479]: E0905 00:11:31.268896 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.269369 kubelet[2479]: E0905 00:11:31.269312 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.269369 kubelet[2479]: W0905 00:11:31.269328 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.269369 kubelet[2479]: E0905 00:11:31.269342 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.269739 kubelet[2479]: E0905 00:11:31.269551 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.269739 kubelet[2479]: W0905 00:11:31.269563 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.269739 kubelet[2479]: E0905 00:11:31.269572 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.269739 kubelet[2479]: E0905 00:11:31.269870 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.269739 kubelet[2479]: W0905 00:11:31.269877 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.269739 kubelet[2479]: E0905 00:11:31.269885 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.269739 kubelet[2479]: E0905 00:11:31.270032 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.269739 kubelet[2479]: W0905 00:11:31.270040 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.269739 kubelet[2479]: E0905 00:11:31.270048 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.269739 kubelet[2479]: E0905 00:11:31.270185 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.273626 kubelet[2479]: W0905 00:11:31.270192 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.273626 kubelet[2479]: E0905 00:11:31.270199 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.273626 kubelet[2479]: E0905 00:11:31.270768 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.273626 kubelet[2479]: W0905 00:11:31.270779 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.273626 kubelet[2479]: E0905 00:11:31.270788 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.273626 kubelet[2479]: E0905 00:11:31.271498 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.273626 kubelet[2479]: W0905 00:11:31.271509 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.273626 kubelet[2479]: E0905 00:11:31.271520 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.273626 kubelet[2479]: I0905 00:11:31.271541 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9349fa40-4aa1-44d3-a3a7-8c6748ecbd04-kubelet-dir\") pod \"csi-node-driver-x7qbm\" (UID: \"9349fa40-4aa1-44d3-a3a7-8c6748ecbd04\") " pod="calico-system/csi-node-driver-x7qbm" Sep 5 00:11:31.273816 kubelet[2479]: E0905 00:11:31.273408 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.273816 kubelet[2479]: W0905 00:11:31.273422 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.273816 kubelet[2479]: E0905 00:11:31.273434 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.273816 kubelet[2479]: I0905 00:11:31.273455 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9349fa40-4aa1-44d3-a3a7-8c6748ecbd04-registration-dir\") pod \"csi-node-driver-x7qbm\" (UID: \"9349fa40-4aa1-44d3-a3a7-8c6748ecbd04\") " pod="calico-system/csi-node-driver-x7qbm" Sep 5 00:11:31.273816 kubelet[2479]: E0905 00:11:31.273722 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.273816 kubelet[2479]: W0905 00:11:31.273731 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.273816 kubelet[2479]: E0905 00:11:31.273741 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.273954 kubelet[2479]: I0905 00:11:31.273846 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9349fa40-4aa1-44d3-a3a7-8c6748ecbd04-socket-dir\") pod \"csi-node-driver-x7qbm\" (UID: \"9349fa40-4aa1-44d3-a3a7-8c6748ecbd04\") " pod="calico-system/csi-node-driver-x7qbm" Sep 5 00:11:31.273954 kubelet[2479]: E0905 00:11:31.273923 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.273954 kubelet[2479]: W0905 00:11:31.273929 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.273954 kubelet[2479]: E0905 00:11:31.273937 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.274302 kubelet[2479]: E0905 00:11:31.274093 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.274302 kubelet[2479]: W0905 00:11:31.274104 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.274302 kubelet[2479]: E0905 00:11:31.274112 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.275218 kubelet[2479]: E0905 00:11:31.274374 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.275218 kubelet[2479]: W0905 00:11:31.274385 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.275218 kubelet[2479]: E0905 00:11:31.274394 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.275218 kubelet[2479]: I0905 00:11:31.274417 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9349fa40-4aa1-44d3-a3a7-8c6748ecbd04-varrun\") pod \"csi-node-driver-x7qbm\" (UID: \"9349fa40-4aa1-44d3-a3a7-8c6748ecbd04\") " pod="calico-system/csi-node-driver-x7qbm" Sep 5 00:11:31.275218 kubelet[2479]: E0905 00:11:31.274787 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.275218 kubelet[2479]: W0905 00:11:31.274990 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.275218 kubelet[2479]: E0905 00:11:31.275015 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.276537 systemd[1]: Started cri-containerd-d9018edf71186903e66be9b2f7cd9973ddcb261f6d673bd6426e4fce7e41a079.scope - libcontainer container d9018edf71186903e66be9b2f7cd9973ddcb261f6d673bd6426e4fce7e41a079. Sep 5 00:11:31.279304 kubelet[2479]: E0905 00:11:31.278546 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.279416 kubelet[2479]: W0905 00:11:31.279394 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.279639 kubelet[2479]: E0905 00:11:31.279472 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.280635 kubelet[2479]: E0905 00:11:31.280611 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.280971 kubelet[2479]: W0905 00:11:31.280945 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.281298 kubelet[2479]: E0905 00:11:31.281105 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.282983 kubelet[2479]: E0905 00:11:31.282692 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.283174 kubelet[2479]: W0905 00:11:31.283097 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.283302 kubelet[2479]: E0905 00:11:31.283132 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.283746 kubelet[2479]: E0905 00:11:31.283703 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.283746 kubelet[2479]: W0905 00:11:31.283726 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.283746 kubelet[2479]: E0905 00:11:31.283749 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.283936 kubelet[2479]: I0905 00:11:31.283794 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnkqr\" (UniqueName: \"kubernetes.io/projected/9349fa40-4aa1-44d3-a3a7-8c6748ecbd04-kube-api-access-bnkqr\") pod \"csi-node-driver-x7qbm\" (UID: \"9349fa40-4aa1-44d3-a3a7-8c6748ecbd04\") " pod="calico-system/csi-node-driver-x7qbm" Sep 5 00:11:31.284020 kubelet[2479]: E0905 00:11:31.283989 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.284020 kubelet[2479]: W0905 00:11:31.284000 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.284020 kubelet[2479]: E0905 00:11:31.284010 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.284237 kubelet[2479]: E0905 00:11:31.284215 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.284237 kubelet[2479]: W0905 00:11:31.284230 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.284346 kubelet[2479]: E0905 00:11:31.284239 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.284637 kubelet[2479]: E0905 00:11:31.284621 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.284637 kubelet[2479]: W0905 00:11:31.284635 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.285171 kubelet[2479]: E0905 00:11:31.284645 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.285171 kubelet[2479]: E0905 00:11:31.284799 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.285171 kubelet[2479]: W0905 00:11:31.284807 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.285171 kubelet[2479]: E0905 00:11:31.284815 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.320826 containerd[1435]: time="2025-09-05T00:11:31.320720226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kwvbm,Uid:fd082513-777d-4a6a-adb7-f6cde16a44fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9018edf71186903e66be9b2f7cd9973ddcb261f6d673bd6426e4fce7e41a079\"" Sep 5 00:11:31.385225 kubelet[2479]: E0905 00:11:31.385196 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.385225 kubelet[2479]: W0905 00:11:31.385217 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.385453 kubelet[2479]: E0905 00:11:31.385243 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.385527 kubelet[2479]: E0905 00:11:31.385516 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.385527 kubelet[2479]: W0905 00:11:31.385527 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.385574 kubelet[2479]: E0905 00:11:31.385537 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.386086 kubelet[2479]: E0905 00:11:31.386063 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.386144 kubelet[2479]: W0905 00:11:31.386087 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.386144 kubelet[2479]: E0905 00:11:31.386102 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.386315 kubelet[2479]: E0905 00:11:31.386301 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.386315 kubelet[2479]: W0905 00:11:31.386316 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.386373 kubelet[2479]: E0905 00:11:31.386326 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.387098 kubelet[2479]: E0905 00:11:31.387076 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.387098 kubelet[2479]: W0905 00:11:31.387093 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.387215 kubelet[2479]: E0905 00:11:31.387107 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.387396 kubelet[2479]: E0905 00:11:31.387378 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.387396 kubelet[2479]: W0905 00:11:31.387394 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.387451 kubelet[2479]: E0905 00:11:31.387406 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.387659 kubelet[2479]: E0905 00:11:31.387634 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.387659 kubelet[2479]: W0905 00:11:31.387648 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.387659 kubelet[2479]: E0905 00:11:31.387659 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.387820 kubelet[2479]: E0905 00:11:31.387810 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.387820 kubelet[2479]: W0905 00:11:31.387820 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.387874 kubelet[2479]: E0905 00:11:31.387828 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.387970 kubelet[2479]: E0905 00:11:31.387956 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.387970 kubelet[2479]: W0905 00:11:31.387964 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.387970 kubelet[2479]: E0905 00:11:31.387972 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.388309 kubelet[2479]: E0905 00:11:31.388269 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.388309 kubelet[2479]: W0905 00:11:31.388296 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.388309 kubelet[2479]: E0905 00:11:31.388307 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.388512 kubelet[2479]: E0905 00:11:31.388500 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.388512 kubelet[2479]: W0905 00:11:31.388512 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.388624 kubelet[2479]: E0905 00:11:31.388523 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.388697 kubelet[2479]: E0905 00:11:31.388680 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.388697 kubelet[2479]: W0905 00:11:31.388692 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.388750 kubelet[2479]: E0905 00:11:31.388699 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.388875 kubelet[2479]: E0905 00:11:31.388864 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.388875 kubelet[2479]: W0905 00:11:31.388875 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.388933 kubelet[2479]: E0905 00:11:31.388884 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.389062 kubelet[2479]: E0905 00:11:31.389050 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.389062 kubelet[2479]: W0905 00:11:31.389060 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.389110 kubelet[2479]: E0905 00:11:31.389068 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.389248 kubelet[2479]: E0905 00:11:31.389236 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.389248 kubelet[2479]: W0905 00:11:31.389247 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.389359 kubelet[2479]: E0905 00:11:31.389257 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.389447 kubelet[2479]: E0905 00:11:31.389434 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.389492 kubelet[2479]: W0905 00:11:31.389446 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.389492 kubelet[2479]: E0905 00:11:31.389488 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.389655 kubelet[2479]: E0905 00:11:31.389643 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.389655 kubelet[2479]: W0905 00:11:31.389653 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.389718 kubelet[2479]: E0905 00:11:31.389661 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.389841 kubelet[2479]: E0905 00:11:31.389830 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.389841 kubelet[2479]: W0905 00:11:31.389840 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.389892 kubelet[2479]: E0905 00:11:31.389851 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.390063 kubelet[2479]: E0905 00:11:31.390042 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.390063 kubelet[2479]: W0905 00:11:31.390059 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.390162 kubelet[2479]: E0905 00:11:31.390069 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.390242 kubelet[2479]: E0905 00:11:31.390230 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.390242 kubelet[2479]: W0905 00:11:31.390240 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.390311 kubelet[2479]: E0905 00:11:31.390249 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.390438 kubelet[2479]: E0905 00:11:31.390425 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.390478 kubelet[2479]: W0905 00:11:31.390438 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.390478 kubelet[2479]: E0905 00:11:31.390448 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.390860 kubelet[2479]: E0905 00:11:31.390839 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.390860 kubelet[2479]: W0905 00:11:31.390859 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.390927 kubelet[2479]: E0905 00:11:31.390875 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.391228 kubelet[2479]: E0905 00:11:31.391211 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.391228 kubelet[2479]: W0905 00:11:31.391227 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.391326 kubelet[2479]: E0905 00:11:31.391245 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:31.391513 kubelet[2479]: E0905 00:11:31.391490 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.391513 kubelet[2479]: W0905 00:11:31.391499 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.391513 kubelet[2479]: E0905 00:11:31.391509 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.391958 kubelet[2479]: E0905 00:11:31.391927 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.391958 kubelet[2479]: W0905 00:11:31.391940 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.391958 kubelet[2479]: E0905 00:11:31.391952 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.403637 kubelet[2479]: E0905 00:11:31.403600 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:31.403637 kubelet[2479]: W0905 00:11:31.403620 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:31.403637 kubelet[2479]: E0905 00:11:31.403640 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:31.893299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2999255424.mount: Deactivated successfully. 
Sep 5 00:11:32.730232 containerd[1435]: time="2025-09-05T00:11:32.730182407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:32.731129 containerd[1435]: time="2025-09-05T00:11:32.730785427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 5 00:11:32.732218 containerd[1435]: time="2025-09-05T00:11:32.732170712Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:32.734439 containerd[1435]: time="2025-09-05T00:11:32.734382025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:32.735126 containerd[1435]: time="2025-09-05T00:11:32.735092568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.782630433s" Sep 5 00:11:32.735307 containerd[1435]: time="2025-09-05T00:11:32.735223492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 5 00:11:32.736296 containerd[1435]: time="2025-09-05T00:11:32.736245406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 00:11:32.749215 containerd[1435]: time="2025-09-05T00:11:32.749162268Z" level=info msg="CreateContainer within sandbox \"b45b547f3ee7c9cac1fd969efa6b4a215007f6d1815d32373d4ecd20baa192dd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 00:11:32.761327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount218185066.mount: Deactivated successfully. Sep 5 00:11:32.766100 containerd[1435]: time="2025-09-05T00:11:32.766036820Z" level=info msg="CreateContainer within sandbox \"b45b547f3ee7c9cac1fd969efa6b4a215007f6d1815d32373d4ecd20baa192dd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"445fee110ab25bc4c01b90336450b3551f75cfc87bc555102719668fdbb96ab8\"" Sep 5 00:11:32.766657 containerd[1435]: time="2025-09-05T00:11:32.766529636Z" level=info msg="StartContainer for \"445fee110ab25bc4c01b90336450b3551f75cfc87bc555102719668fdbb96ab8\"" Sep 5 00:11:32.796503 systemd[1]: Started cri-containerd-445fee110ab25bc4c01b90336450b3551f75cfc87bc555102719668fdbb96ab8.scope - libcontainer container 445fee110ab25bc4c01b90336450b3551f75cfc87bc555102719668fdbb96ab8. 
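The pull metrics above allow a quick throughput check: 33105629 bytes (the reported repo size; the "bytes read=33105775" counter is slightly larger) fetched in 1.782630433s is roughly 17.7 MiB/s:

    package main

    import "fmt"

    func main() {
    	const imageBytes = 33105629.0   // repo size reported by containerd
    	const pullSeconds = 1.782630433 // duration from the "Pulled image" entry
    	fmt.Printf("%.2f MiB/s\n", imageBytes/pullSeconds/(1<<20)) // ~17.71 MiB/s
    }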
Sep 5 00:11:32.831856 containerd[1435]: time="2025-09-05T00:11:32.831746890Z" level=info msg="StartContainer for \"445fee110ab25bc4c01b90336450b3551f75cfc87bc555102719668fdbb96ab8\" returns successfully" Sep 5 00:11:33.312185 kubelet[2479]: E0905 00:11:33.311824 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x7qbm" podUID="9349fa40-4aa1-44d3-a3a7-8c6748ecbd04" Sep 5 00:11:33.403517 kubelet[2479]: E0905 00:11:33.403082 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:33.482905 kubelet[2479]: E0905 00:11:33.482872 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.482905 kubelet[2479]: W0905 00:11:33.482901 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.483085 kubelet[2479]: E0905 00:11:33.482923 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.483146 kubelet[2479]: E0905 00:11:33.483134 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.483146 kubelet[2479]: W0905 00:11:33.483145 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.483210 kubelet[2479]: E0905 00:11:33.483154 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.483419 kubelet[2479]: E0905 00:11:33.483406 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.483419 kubelet[2479]: W0905 00:11:33.483418 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.483497 kubelet[2479]: E0905 00:11:33.483430 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.483615 kubelet[2479]: E0905 00:11:33.483605 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.483645 kubelet[2479]: W0905 00:11:33.483615 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.483645 kubelet[2479]: E0905 00:11:33.483622 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:33.483819 kubelet[2479]: E0905 00:11:33.483809 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.483819 kubelet[2479]: W0905 00:11:33.483819 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.483879 kubelet[2479]: E0905 00:11:33.483826 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.483975 kubelet[2479]: E0905 00:11:33.483966 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.483975 kubelet[2479]: W0905 00:11:33.483975 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.484034 kubelet[2479]: E0905 00:11:33.483982 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.484177 kubelet[2479]: E0905 00:11:33.484165 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.484225 kubelet[2479]: W0905 00:11:33.484177 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.484225 kubelet[2479]: E0905 00:11:33.484187 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.484373 kubelet[2479]: E0905 00:11:33.484361 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.484373 kubelet[2479]: W0905 00:11:33.484373 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.484434 kubelet[2479]: E0905 00:11:33.484381 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.484544 kubelet[2479]: E0905 00:11:33.484530 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.484544 kubelet[2479]: W0905 00:11:33.484544 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.484601 kubelet[2479]: E0905 00:11:33.484552 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:33.484693 kubelet[2479]: E0905 00:11:33.484681 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.484725 kubelet[2479]: W0905 00:11:33.484695 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.484725 kubelet[2479]: E0905 00:11:33.484703 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.484838 kubelet[2479]: E0905 00:11:33.484824 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.484838 kubelet[2479]: W0905 00:11:33.484837 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.484891 kubelet[2479]: E0905 00:11:33.484844 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.484982 kubelet[2479]: E0905 00:11:33.484973 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.485014 kubelet[2479]: W0905 00:11:33.484986 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.485014 kubelet[2479]: E0905 00:11:33.484994 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.485138 kubelet[2479]: E0905 00:11:33.485127 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.485167 kubelet[2479]: W0905 00:11:33.485140 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.485167 kubelet[2479]: E0905 00:11:33.485148 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.485317 kubelet[2479]: E0905 00:11:33.485307 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.485317 kubelet[2479]: W0905 00:11:33.485316 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.485381 kubelet[2479]: E0905 00:11:33.485324 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:33.485457 kubelet[2479]: E0905 00:11:33.485447 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.485487 kubelet[2479]: W0905 00:11:33.485461 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.485487 kubelet[2479]: E0905 00:11:33.485468 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.506127 kubelet[2479]: E0905 00:11:33.506012 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.506127 kubelet[2479]: W0905 00:11:33.506035 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.506127 kubelet[2479]: E0905 00:11:33.506054 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.506575 kubelet[2479]: E0905 00:11:33.506499 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.506575 kubelet[2479]: W0905 00:11:33.506513 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.506575 kubelet[2479]: E0905 00:11:33.506524 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.507125 kubelet[2479]: E0905 00:11:33.507089 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.507125 kubelet[2479]: W0905 00:11:33.507110 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.507125 kubelet[2479]: E0905 00:11:33.507124 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.507657 kubelet[2479]: E0905 00:11:33.507597 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.507657 kubelet[2479]: W0905 00:11:33.507641 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.507657 kubelet[2479]: E0905 00:11:33.507653 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:33.507993 kubelet[2479]: E0905 00:11:33.507972 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.508036 kubelet[2479]: W0905 00:11:33.507988 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.508036 kubelet[2479]: E0905 00:11:33.508020 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.508250 kubelet[2479]: E0905 00:11:33.508235 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.508250 kubelet[2479]: W0905 00:11:33.508246 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.508324 kubelet[2479]: E0905 00:11:33.508256 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.508472 kubelet[2479]: E0905 00:11:33.508458 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.508472 kubelet[2479]: W0905 00:11:33.508471 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.508542 kubelet[2479]: E0905 00:11:33.508481 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.508739 kubelet[2479]: E0905 00:11:33.508723 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.508769 kubelet[2479]: W0905 00:11:33.508739 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.508769 kubelet[2479]: E0905 00:11:33.508750 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.509244 kubelet[2479]: E0905 00:11:33.509225 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.509244 kubelet[2479]: W0905 00:11:33.509241 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.509320 kubelet[2479]: E0905 00:11:33.509251 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:33.509482 kubelet[2479]: E0905 00:11:33.509467 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.509482 kubelet[2479]: W0905 00:11:33.509479 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.509543 kubelet[2479]: E0905 00:11:33.509488 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.509686 kubelet[2479]: E0905 00:11:33.509673 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.509714 kubelet[2479]: W0905 00:11:33.509685 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.509714 kubelet[2479]: E0905 00:11:33.509693 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.509864 kubelet[2479]: E0905 00:11:33.509854 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.509889 kubelet[2479]: W0905 00:11:33.509864 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.509889 kubelet[2479]: E0905 00:11:33.509873 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.510018 kubelet[2479]: E0905 00:11:33.510008 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.510042 kubelet[2479]: W0905 00:11:33.510018 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.510042 kubelet[2479]: E0905 00:11:33.510025 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.510221 kubelet[2479]: E0905 00:11:33.510208 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.510250 kubelet[2479]: W0905 00:11:33.510221 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.510250 kubelet[2479]: E0905 00:11:33.510232 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:33.510604 kubelet[2479]: E0905 00:11:33.510589 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.510627 kubelet[2479]: W0905 00:11:33.510604 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.510627 kubelet[2479]: E0905 00:11:33.510616 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.510818 kubelet[2479]: E0905 00:11:33.510804 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.510818 kubelet[2479]: W0905 00:11:33.510816 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.510876 kubelet[2479]: E0905 00:11:33.510824 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.511163 kubelet[2479]: E0905 00:11:33.511149 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.511192 kubelet[2479]: W0905 00:11:33.511163 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.511192 kubelet[2479]: E0905 00:11:33.511172 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:11:33.511388 kubelet[2479]: E0905 00:11:33.511375 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:11:33.511415 kubelet[2479]: W0905 00:11:33.511388 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:11:33.511415 kubelet[2479]: E0905 00:11:33.511397 2479 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:11:34.029336 containerd[1435]: time="2025-09-05T00:11:34.028987413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:34.029735 containerd[1435]: time="2025-09-05T00:11:34.029542749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 5 00:11:34.030370 containerd[1435]: time="2025-09-05T00:11:34.030332653Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:34.032443 containerd[1435]: time="2025-09-05T00:11:34.032400996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:34.033661 containerd[1435]: time="2025-09-05T00:11:34.033321344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.297040456s" Sep 5 00:11:34.033661 containerd[1435]: time="2025-09-05T00:11:34.033357425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 5 00:11:34.037219 containerd[1435]: time="2025-09-05T00:11:34.037165580Z" level=info msg="CreateContainer within sandbox \"d9018edf71186903e66be9b2f7cd9973ddcb261f6d673bd6426e4fce7e41a079\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 00:11:34.048673 containerd[1435]: time="2025-09-05T00:11:34.048543125Z" level=info msg="CreateContainer within sandbox \"d9018edf71186903e66be9b2f7cd9973ddcb261f6d673bd6426e4fce7e41a079\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1d1041f189c6c0ba2439a5e5ec3a72823e7b15dade25ebe6eb5486e2ae68ad8d\"" Sep 5 00:11:34.049970 containerd[1435]: time="2025-09-05T00:11:34.049011659Z" level=info msg="StartContainer for \"1d1041f189c6c0ba2439a5e5ec3a72823e7b15dade25ebe6eb5486e2ae68ad8d\"" Sep 5 00:11:34.078513 systemd[1]: Started cri-containerd-1d1041f189c6c0ba2439a5e5ec3a72823e7b15dade25ebe6eb5486e2ae68ad8d.scope - libcontainer container 1d1041f189c6c0ba2439a5e5ec3a72823e7b15dade25ebe6eb5486e2ae68ad8d. Sep 5 00:11:34.109505 containerd[1435]: time="2025-09-05T00:11:34.109447249Z" level=info msg="StartContainer for \"1d1041f189c6c0ba2439a5e5ec3a72823e7b15dade25ebe6eb5486e2ae68ad8d\" returns successfully" Sep 5 00:11:34.120884 systemd[1]: cri-containerd-1d1041f189c6c0ba2439a5e5ec3a72823e7b15dade25ebe6eb5486e2ae68ad8d.scope: Deactivated successfully. 
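The flexvol-driver container started above is Calico's pod2daemon-flexvol init container: it copies the uds binary into kubelet's FlexVolume plugin directory and exits, which is why its scope is deactivated moments after start and why the driver-call errors earlier in the log eventually stop. A small sketch of a readiness check for that install, with the path taken from the error messages above:

```go
// check_flexvol.go - sketch: verify the Calico uds FlexVolume driver has
// been installed at the path kubelet was probing in the errors above.
package main

import (
	"fmt"
	"os"
)

func main() {
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	info, err := os.Stat(driver)
	if err != nil {
		fmt.Println("driver not installed yet:", err) // kubelet keeps logging driver-call failures
		os.Exit(1)
	}
	// The binary must be executable for kubelet's driver-call to succeed.
	fmt.Printf("installed: %s (mode %v, %d bytes)\n", driver, info.Mode(), info.Size())
}
```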
Sep 5 00:11:34.157460 containerd[1435]: time="2025-09-05T00:11:34.157402101Z" level=info msg="shim disconnected" id=1d1041f189c6c0ba2439a5e5ec3a72823e7b15dade25ebe6eb5486e2ae68ad8d namespace=k8s.io Sep 5 00:11:34.157902 containerd[1435]: time="2025-09-05T00:11:34.157672309Z" level=warning msg="cleaning up after shim disconnected" id=1d1041f189c6c0ba2439a5e5ec3a72823e7b15dade25ebe6eb5486e2ae68ad8d namespace=k8s.io Sep 5 00:11:34.157902 containerd[1435]: time="2025-09-05T00:11:34.157689430Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:11:34.405660 kubelet[2479]: I0905 00:11:34.405603 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:11:34.406575 kubelet[2479]: E0905 00:11:34.405913 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:34.409597 containerd[1435]: time="2025-09-05T00:11:34.409561016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 00:11:34.426419 kubelet[2479]: I0905 00:11:34.426348 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84c6ffdf69-kqprz" podStartSLOduration=2.6421792 podStartE2EDuration="4.426331804s" podCreationTimestamp="2025-09-05 00:11:30 +0000 UTC" firstStartedPulling="2025-09-05 00:11:30.951914676 +0000 UTC m=+23.714486057" lastFinishedPulling="2025-09-05 00:11:32.73606724 +0000 UTC m=+25.498638661" observedRunningTime="2025-09-05 00:11:33.419084659 +0000 UTC m=+26.181656080" watchObservedRunningTime="2025-09-05 00:11:34.426331804 +0000 UTC m=+27.188903225" Sep 5 00:11:34.741424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d1041f189c6c0ba2439a5e5ec3a72823e7b15dade25ebe6eb5486e2ae68ad8d-rootfs.mount: Deactivated successfully. 
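The pod_startup_latency_tracker entry above can be reproduced from its own timestamps: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that, matching the logged 2.6421792 up to float truncation. A worked check in Go — the decomposition is inferred from the numbers in the entry, not taken from kubelet source:

```go
// slo_check.go - recompute the startup durations reported by
// pod_startup_latency_tracker above from the logged timestamps.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-05 00:11:30 +0000 UTC")
	firstPull := mustParse("2025-09-05 00:11:30.951914676 +0000 UTC")
	lastPull := mustParse("2025-09-05 00:11:32.73606724 +0000 UTC")
	observed := mustParse("2025-09-05 00:11:34.426331804 +0000 UTC")

	e2e := observed.Sub(created)         // 4.426331804s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.64217924s  = podStartSLOduration
	fmt.Println("e2e:", e2e, "slo:", slo)
}
```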
Sep 5 00:11:35.323777 kubelet[2479]: E0905 00:11:35.323324 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x7qbm" podUID="9349fa40-4aa1-44d3-a3a7-8c6748ecbd04" Sep 5 00:11:37.312925 kubelet[2479]: E0905 00:11:37.312570 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x7qbm" podUID="9349fa40-4aa1-44d3-a3a7-8c6748ecbd04" Sep 5 00:11:37.370626 containerd[1435]: time="2025-09-05T00:11:37.370547524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:37.383327 containerd[1435]: time="2025-09-05T00:11:37.383262310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 00:11:37.402072 containerd[1435]: time="2025-09-05T00:11:37.402029940Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:37.418915 containerd[1435]: time="2025-09-05T00:11:37.418861237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:37.419744 containerd[1435]: time="2025-09-05T00:11:37.419713100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.010110603s" Sep 5 00:11:37.419793 containerd[1435]: time="2025-09-05T00:11:37.419752421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 5 00:11:37.442953 containerd[1435]: time="2025-09-05T00:11:37.442909090Z" level=info msg="CreateContainer within sandbox \"d9018edf71186903e66be9b2f7cd9973ddcb261f6d673bd6426e4fce7e41a079\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 00:11:37.527511 containerd[1435]: time="2025-09-05T00:11:37.527453667Z" level=info msg="CreateContainer within sandbox \"d9018edf71186903e66be9b2f7cd9973ddcb261f6d673bd6426e4fce7e41a079\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c8d483dff20baa9f02089c75dc5ac6fd9e113db694e1308a2ee8c6d8799fe328\"" Sep 5 00:11:37.528107 containerd[1435]: time="2025-09-05T00:11:37.528080964Z" level=info msg="StartContainer for \"c8d483dff20baa9f02089c75dc5ac6fd9e113db694e1308a2ee8c6d8799fe328\"" Sep 5 00:11:37.560499 systemd[1]: Started cri-containerd-c8d483dff20baa9f02089c75dc5ac6fd9e113db694e1308a2ee8c6d8799fe328.scope - libcontainer container c8d483dff20baa9f02089c75dc5ac6fd9e113db694e1308a2ee8c6d8799fe328. 
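The recurring "cni plugin not initialized" / NetworkReady=false condition above persists until a CNI network config appears on disk; the install-cni container just started is what writes it. A hedged sketch of that readiness condition, assuming the conventional /etc/cni/net.d location:

```go
// cni_ready.go - sketch: report whether a CNI network config exists yet
// under /etc/cni/net.d (the conventional path; an assumption here),
// which is what clears the NetworkReady=false condition above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println("network not ready:", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if strings.HasSuffix(name, ".conf") || strings.HasSuffix(name, ".conflist") {
			fmt.Println("found CNI config:", name) // e.g. a Calico conflist once install-cni finishes
			return
		}
	}
	fmt.Println("network not ready: no CNI config files yet")
}
```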
Sep 5 00:11:37.588740 containerd[1435]: time="2025-09-05T00:11:37.588109755Z" level=info msg="StartContainer for \"c8d483dff20baa9f02089c75dc5ac6fd9e113db694e1308a2ee8c6d8799fe328\" returns successfully" Sep 5 00:11:38.150269 systemd[1]: cri-containerd-c8d483dff20baa9f02089c75dc5ac6fd9e113db694e1308a2ee8c6d8799fe328.scope: Deactivated successfully. Sep 5 00:11:38.168651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8d483dff20baa9f02089c75dc5ac6fd9e113db694e1308a2ee8c6d8799fe328-rootfs.mount: Deactivated successfully. Sep 5 00:11:38.172419 containerd[1435]: time="2025-09-05T00:11:38.172356434Z" level=info msg="shim disconnected" id=c8d483dff20baa9f02089c75dc5ac6fd9e113db694e1308a2ee8c6d8799fe328 namespace=k8s.io Sep 5 00:11:38.172419 containerd[1435]: time="2025-09-05T00:11:38.172420195Z" level=warning msg="cleaning up after shim disconnected" id=c8d483dff20baa9f02089c75dc5ac6fd9e113db694e1308a2ee8c6d8799fe328 namespace=k8s.io Sep 5 00:11:38.172565 containerd[1435]: time="2025-09-05T00:11:38.172430836Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:11:38.201545 kubelet[2479]: I0905 00:11:38.201509 2479 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 00:11:38.247929 systemd[1]: Created slice kubepods-besteffort-podb03c02b3_c57d_4b35_90a2_63009120bbea.slice - libcontainer container kubepods-besteffort-podb03c02b3_c57d_4b35_90a2_63009120bbea.slice. Sep 5 00:11:38.261233 systemd[1]: Created slice kubepods-burstable-podea063706_54f6_49f3_9a40_c61fd044db8b.slice - libcontainer container kubepods-burstable-podea063706_54f6_49f3_9a40_c61fd044db8b.slice. Sep 5 00:11:38.269031 systemd[1]: Created slice kubepods-burstable-podcc10bbf2_cfa3_42d8_98e2_b1172ebe9c0e.slice - libcontainer container kubepods-burstable-podcc10bbf2_cfa3_42d8_98e2_b1172ebe9c0e.slice. Sep 5 00:11:38.277006 systemd[1]: Created slice kubepods-besteffort-pod49d7d624_94b8_4c95_b7e4_d89b51613009.slice - libcontainer container kubepods-besteffort-pod49d7d624_94b8_4c95_b7e4_d89b51613009.slice. Sep 5 00:11:38.283103 systemd[1]: Created slice kubepods-besteffort-pod06a23f84_da38_460c_9db8_35e6a608cf99.slice - libcontainer container kubepods-besteffort-pod06a23f84_da38_460c_9db8_35e6a608cf99.slice. Sep 5 00:11:38.289190 systemd[1]: Created slice kubepods-besteffort-pod5167327f_f042_4e3c_b7e0_0cc3388932ec.slice - libcontainer container kubepods-besteffort-pod5167327f_f042_4e3c_b7e0_0cc3388932ec.slice. Sep 5 00:11:38.295088 systemd[1]: Created slice kubepods-besteffort-pod1c64de6c_5dc9_4e95_be35_a2e9b0b844f7.slice - libcontainer container kubepods-besteffort-pod1c64de6c_5dc9_4e95_be35_a2e9b0b844f7.slice. 
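The slice names created above encode each pod's QoS class and UID: dashes in the UID become underscores and the result is embedded in kubepods-<qos>-pod<uid>.slice, as the entry for b03c02b3-c57d-4b35-90a2-63009120bbea shows. A small sketch of that mapping, read off the names in this log rather than taken from kubelet source:

```go
// pod_slice.go - derive the systemd slice name for a pod the way the
// "Created slice" entries above show it: QoS class plus UID with "-" -> "_".
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Matches "kubepods-besteffort-podb03c02b3_c57d_4b35_90a2_63009120bbea.slice" above.
	fmt.Println(podSlice("besteffort", "b03c02b3-c57d-4b35-90a2-63009120bbea"))
	fmt.Println(podSlice("burstable", "ea063706-54f6-49f3-9a40-c61fd044db8b"))
}
```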
Sep 5 00:11:38.340244 kubelet[2479]: I0905 00:11:38.340191 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c64de6c-5dc9-4e95-be35-a2e9b0b844f7-whisker-ca-bundle\") pod \"whisker-dcc88dcbb-zdnrb\" (UID: \"1c64de6c-5dc9-4e95-be35-a2e9b0b844f7\") " pod="calico-system/whisker-dcc88dcbb-zdnrb" Sep 5 00:11:38.340244 kubelet[2479]: I0905 00:11:38.340244 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ljhl\" (UniqueName: \"kubernetes.io/projected/1c64de6c-5dc9-4e95-be35-a2e9b0b844f7-kube-api-access-8ljhl\") pod \"whisker-dcc88dcbb-zdnrb\" (UID: \"1c64de6c-5dc9-4e95-be35-a2e9b0b844f7\") " pod="calico-system/whisker-dcc88dcbb-zdnrb" Sep 5 00:11:38.340647 kubelet[2479]: I0905 00:11:38.340264 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzcw9\" (UniqueName: \"kubernetes.io/projected/cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e-kube-api-access-hzcw9\") pod \"coredns-674b8bbfcf-r9lrh\" (UID: \"cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e\") " pod="kube-system/coredns-674b8bbfcf-r9lrh" Sep 5 00:11:38.340647 kubelet[2479]: I0905 00:11:38.340309 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5167327f-f042-4e3c-b7e0-0cc3388932ec-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-x9dsg\" (UID: \"5167327f-f042-4e3c-b7e0-0cc3388932ec\") " pod="calico-system/goldmane-54d579b49d-x9dsg" Sep 5 00:11:38.340647 kubelet[2479]: I0905 00:11:38.340337 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5167327f-f042-4e3c-b7e0-0cc3388932ec-goldmane-key-pair\") pod \"goldmane-54d579b49d-x9dsg\" (UID: \"5167327f-f042-4e3c-b7e0-0cc3388932ec\") " pod="calico-system/goldmane-54d579b49d-x9dsg" Sep 5 00:11:38.340647 kubelet[2479]: I0905 00:11:38.340353 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75t6j\" (UniqueName: \"kubernetes.io/projected/06a23f84-da38-460c-9db8-35e6a608cf99-kube-api-access-75t6j\") pod \"calico-apiserver-85ffc8f694-nf5lk\" (UID: \"06a23f84-da38-460c-9db8-35e6a608cf99\") " pod="calico-apiserver/calico-apiserver-85ffc8f694-nf5lk" Sep 5 00:11:38.340647 kubelet[2479]: I0905 00:11:38.340369 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e-config-volume\") pod \"coredns-674b8bbfcf-r9lrh\" (UID: \"cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e\") " pod="kube-system/coredns-674b8bbfcf-r9lrh" Sep 5 00:11:38.340764 kubelet[2479]: I0905 00:11:38.340387 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1c64de6c-5dc9-4e95-be35-a2e9b0b844f7-whisker-backend-key-pair\") pod \"whisker-dcc88dcbb-zdnrb\" (UID: \"1c64de6c-5dc9-4e95-be35-a2e9b0b844f7\") " pod="calico-system/whisker-dcc88dcbb-zdnrb" Sep 5 00:11:38.340764 kubelet[2479]: I0905 00:11:38.340407 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b03c02b3-c57d-4b35-90a2-63009120bbea-tigera-ca-bundle\") pod \"calico-kube-controllers-5447c7486-kqmj4\" (UID: \"b03c02b3-c57d-4b35-90a2-63009120bbea\") " pod="calico-system/calico-kube-controllers-5447c7486-kqmj4" Sep 5 00:11:38.340764 kubelet[2479]: I0905 00:11:38.340425 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgt8p\" (UniqueName: \"kubernetes.io/projected/b03c02b3-c57d-4b35-90a2-63009120bbea-kube-api-access-dgt8p\") pod \"calico-kube-controllers-5447c7486-kqmj4\" (UID: \"b03c02b3-c57d-4b35-90a2-63009120bbea\") " pod="calico-system/calico-kube-controllers-5447c7486-kqmj4" Sep 5 00:11:38.340764 kubelet[2479]: I0905 00:11:38.340442 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/49d7d624-94b8-4c95-b7e4-d89b51613009-calico-apiserver-certs\") pod \"calico-apiserver-85ffc8f694-8vh85\" (UID: \"49d7d624-94b8-4c95-b7e4-d89b51613009\") " pod="calico-apiserver/calico-apiserver-85ffc8f694-8vh85" Sep 5 00:11:38.340764 kubelet[2479]: I0905 00:11:38.340457 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njzp5\" (UniqueName: \"kubernetes.io/projected/49d7d624-94b8-4c95-b7e4-d89b51613009-kube-api-access-njzp5\") pod \"calico-apiserver-85ffc8f694-8vh85\" (UID: \"49d7d624-94b8-4c95-b7e4-d89b51613009\") " pod="calico-apiserver/calico-apiserver-85ffc8f694-8vh85" Sep 5 00:11:38.340872 kubelet[2479]: I0905 00:11:38.340483 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea063706-54f6-49f3-9a40-c61fd044db8b-config-volume\") pod \"coredns-674b8bbfcf-8r5bf\" (UID: \"ea063706-54f6-49f3-9a40-c61fd044db8b\") " pod="kube-system/coredns-674b8bbfcf-8r5bf" Sep 5 00:11:38.340872 kubelet[2479]: I0905 00:11:38.340498 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pg9t\" (UniqueName: \"kubernetes.io/projected/ea063706-54f6-49f3-9a40-c61fd044db8b-kube-api-access-6pg9t\") pod \"coredns-674b8bbfcf-8r5bf\" (UID: \"ea063706-54f6-49f3-9a40-c61fd044db8b\") " pod="kube-system/coredns-674b8bbfcf-8r5bf" Sep 5 00:11:38.340872 kubelet[2479]: I0905 00:11:38.340513 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpj5q\" (UniqueName: \"kubernetes.io/projected/5167327f-f042-4e3c-b7e0-0cc3388932ec-kube-api-access-cpj5q\") pod \"goldmane-54d579b49d-x9dsg\" (UID: \"5167327f-f042-4e3c-b7e0-0cc3388932ec\") " pod="calico-system/goldmane-54d579b49d-x9dsg" Sep 5 00:11:38.340872 kubelet[2479]: I0905 00:11:38.340530 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/06a23f84-da38-460c-9db8-35e6a608cf99-calico-apiserver-certs\") pod \"calico-apiserver-85ffc8f694-nf5lk\" (UID: \"06a23f84-da38-460c-9db8-35e6a608cf99\") " pod="calico-apiserver/calico-apiserver-85ffc8f694-nf5lk" Sep 5 00:11:38.340872 kubelet[2479]: I0905 00:11:38.340548 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5167327f-f042-4e3c-b7e0-0cc3388932ec-config\") pod \"goldmane-54d579b49d-x9dsg\" (UID: \"5167327f-f042-4e3c-b7e0-0cc3388932ec\") " 
pod="calico-system/goldmane-54d579b49d-x9dsg" Sep 5 00:11:38.416970 containerd[1435]: time="2025-09-05T00:11:38.416825854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 00:11:38.556404 containerd[1435]: time="2025-09-05T00:11:38.556355158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5447c7486-kqmj4,Uid:b03c02b3-c57d-4b35-90a2-63009120bbea,Namespace:calico-system,Attempt:0,}" Sep 5 00:11:38.564851 kubelet[2479]: E0905 00:11:38.564512 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:38.565056 containerd[1435]: time="2025-09-05T00:11:38.565019625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8r5bf,Uid:ea063706-54f6-49f3-9a40-c61fd044db8b,Namespace:kube-system,Attempt:0,}" Sep 5 00:11:38.572590 kubelet[2479]: E0905 00:11:38.572557 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:38.573252 containerd[1435]: time="2025-09-05T00:11:38.573200760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9lrh,Uid:cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e,Namespace:kube-system,Attempt:0,}" Sep 5 00:11:38.580560 containerd[1435]: time="2025-09-05T00:11:38.580480471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ffc8f694-8vh85,Uid:49d7d624-94b8-4c95-b7e4-d89b51613009,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:11:38.587596 containerd[1435]: time="2025-09-05T00:11:38.587142806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ffc8f694-nf5lk,Uid:06a23f84-da38-460c-9db8-35e6a608cf99,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:11:38.594521 containerd[1435]: time="2025-09-05T00:11:38.594113669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-x9dsg,Uid:5167327f-f042-4e3c-b7e0-0cc3388932ec,Namespace:calico-system,Attempt:0,}" Sep 5 00:11:38.600651 containerd[1435]: time="2025-09-05T00:11:38.600600160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dcc88dcbb-zdnrb,Uid:1c64de6c-5dc9-4e95-be35-a2e9b0b844f7,Namespace:calico-system,Attempt:0,}" Sep 5 00:11:38.768739 kubelet[2479]: I0905 00:11:38.768430 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:11:38.768875 kubelet[2479]: E0905 00:11:38.768798 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:38.917994 containerd[1435]: time="2025-09-05T00:11:38.917941653Z" level=error msg="Failed to destroy network for sandbox \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.918986 containerd[1435]: time="2025-09-05T00:11:38.918612191Z" level=error msg="encountered an error cleaning up failed sandbox \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Sep 5 00:11:38.918986 containerd[1435]: time="2025-09-05T00:11:38.918675152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-x9dsg,Uid:5167327f-f042-4e3c-b7e0-0cc3388932ec,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.920250 containerd[1435]: time="2025-09-05T00:11:38.920193712Z" level=error msg="Failed to destroy network for sandbox \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.921444 containerd[1435]: time="2025-09-05T00:11:38.921406904Z" level=error msg="encountered an error cleaning up failed sandbox \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.921691 containerd[1435]: time="2025-09-05T00:11:38.921654711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ffc8f694-nf5lk,Uid:06a23f84-da38-460c-9db8-35e6a608cf99,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.922086 kubelet[2479]: E0905 00:11:38.922048 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.922158 kubelet[2479]: E0905 00:11:38.922115 2479 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85ffc8f694-nf5lk" Sep 5 00:11:38.922158 kubelet[2479]: E0905 00:11:38.922138 2479 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85ffc8f694-nf5lk" Sep 5 00:11:38.922423 kubelet[2479]: E0905 00:11:38.922195 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-85ffc8f694-nf5lk_calico-apiserver(06a23f84-da38-460c-9db8-35e6a608cf99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85ffc8f694-nf5lk_calico-apiserver(06a23f84-da38-460c-9db8-35e6a608cf99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85ffc8f694-nf5lk" podUID="06a23f84-da38-460c-9db8-35e6a608cf99" Sep 5 00:11:38.924488 kubelet[2479]: E0905 00:11:38.924315 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.924488 kubelet[2479]: E0905 00:11:38.924387 2479 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-x9dsg" Sep 5 00:11:38.924488 kubelet[2479]: E0905 00:11:38.924405 2479 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-x9dsg" Sep 5 00:11:38.924611 kubelet[2479]: E0905 00:11:38.924453 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-x9dsg_calico-system(5167327f-f042-4e3c-b7e0-0cc3388932ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-x9dsg_calico-system(5167327f-f042-4e3c-b7e0-0cc3388932ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-x9dsg" podUID="5167327f-f042-4e3c-b7e0-0cc3388932ec" Sep 5 00:11:38.929884 containerd[1435]: time="2025-09-05T00:11:38.929541038Z" level=error msg="Failed to destroy network for sandbox \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.929884 containerd[1435]: time="2025-09-05T00:11:38.929548958Z" level=error msg="Failed to destroy network for sandbox \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.930079 containerd[1435]: time="2025-09-05T00:11:38.930051651Z" level=error msg="Failed to destroy network for sandbox \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.930301 containerd[1435]: time="2025-09-05T00:11:38.930263937Z" level=error msg="encountered an error cleaning up failed sandbox \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.930379 containerd[1435]: time="2025-09-05T00:11:38.930321138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5447c7486-kqmj4,Uid:b03c02b3-c57d-4b35-90a2-63009120bbea,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.930551 containerd[1435]: time="2025-09-05T00:11:38.930440461Z" level=error msg="encountered an error cleaning up failed sandbox \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.930551 containerd[1435]: time="2025-09-05T00:11:38.930473662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9lrh,Uid:cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.930718 kubelet[2479]: E0905 00:11:38.930685 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.930767 kubelet[2479]: E0905 00:11:38.930746 2479 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5447c7486-kqmj4" Sep 5 00:11:38.930816 kubelet[2479]: E0905 
00:11:38.930766 2479 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5447c7486-kqmj4" Sep 5 00:11:38.930847 kubelet[2479]: E0905 00:11:38.930820 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.931094 kubelet[2479]: E0905 00:11:38.930908 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5447c7486-kqmj4_calico-system(b03c02b3-c57d-4b35-90a2-63009120bbea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5447c7486-kqmj4_calico-system(b03c02b3-c57d-4b35-90a2-63009120bbea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5447c7486-kqmj4" podUID="b03c02b3-c57d-4b35-90a2-63009120bbea" Sep 5 00:11:38.931094 kubelet[2479]: E0905 00:11:38.930845 2479 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r9lrh" Sep 5 00:11:38.931094 kubelet[2479]: E0905 00:11:38.930978 2479 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r9lrh" Sep 5 00:11:38.931372 containerd[1435]: time="2025-09-05T00:11:38.931256723Z" level=error msg="encountered an error cleaning up failed sandbox \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.931372 containerd[1435]: time="2025-09-05T00:11:38.931322725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8r5bf,Uid:ea063706-54f6-49f3-9a40-c61fd044db8b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.931869 kubelet[2479]: E0905 00:11:38.931837 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.931926 kubelet[2479]: E0905 00:11:38.931882 2479 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8r5bf" Sep 5 00:11:38.931926 kubelet[2479]: E0905 00:11:38.931899 2479 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8r5bf" Sep 5 00:11:38.931981 kubelet[2479]: E0905 00:11:38.931955 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8r5bf_kube-system(ea063706-54f6-49f3-9a40-c61fd044db8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8r5bf_kube-system(ea063706-54f6-49f3-9a40-c61fd044db8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8r5bf" podUID="ea063706-54f6-49f3-9a40-c61fd044db8b" Sep 5 00:11:38.932118 kubelet[2479]: E0905 00:11:38.932084 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-r9lrh_kube-system(cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-r9lrh_kube-system(cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-r9lrh" podUID="cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e" Sep 5 00:11:38.943474 containerd[1435]: time="2025-09-05T00:11:38.943427842Z" level=error msg="Failed to destroy network for sandbox \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 5 00:11:38.943775 containerd[1435]: time="2025-09-05T00:11:38.943746891Z" level=error msg="encountered an error cleaning up failed sandbox \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.943827 containerd[1435]: time="2025-09-05T00:11:38.943807212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dcc88dcbb-zdnrb,Uid:1c64de6c-5dc9-4e95-be35-a2e9b0b844f7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.944401 kubelet[2479]: E0905 00:11:38.944145 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.944401 kubelet[2479]: E0905 00:11:38.944197 2479 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dcc88dcbb-zdnrb" Sep 5 00:11:38.944401 kubelet[2479]: E0905 00:11:38.944215 2479 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dcc88dcbb-zdnrb" Sep 5 00:11:38.944529 kubelet[2479]: E0905 00:11:38.944266 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-dcc88dcbb-zdnrb_calico-system(1c64de6c-5dc9-4e95-be35-a2e9b0b844f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-dcc88dcbb-zdnrb_calico-system(1c64de6c-5dc9-4e95-be35-a2e9b0b844f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dcc88dcbb-zdnrb" podUID="1c64de6c-5dc9-4e95-be35-a2e9b0b844f7" Sep 5 00:11:38.950832 containerd[1435]: time="2025-09-05T00:11:38.950789356Z" level=error msg="Failed to destroy network for sandbox \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.951123 containerd[1435]: time="2025-09-05T00:11:38.951096364Z" level=error msg="encountered an error cleaning up failed sandbox \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.951169 containerd[1435]: time="2025-09-05T00:11:38.951146645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ffc8f694-8vh85,Uid:49d7d624-94b8-4c95-b7e4-d89b51613009,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.951411 kubelet[2479]: E0905 00:11:38.951373 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:38.951466 kubelet[2479]: E0905 00:11:38.951434 2479 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85ffc8f694-8vh85" Sep 5 00:11:38.951494 kubelet[2479]: E0905 00:11:38.951453 2479 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85ffc8f694-8vh85" Sep 5 00:11:38.951556 kubelet[2479]: E0905 00:11:38.951530 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85ffc8f694-8vh85_calico-apiserver(49d7d624-94b8-4c95-b7e4-d89b51613009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85ffc8f694-8vh85_calico-apiserver(49d7d624-94b8-4c95-b7e4-d89b51613009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85ffc8f694-8vh85" podUID="49d7d624-94b8-4c95-b7e4-d89b51613009" Sep 5 00:11:39.318794 systemd[1]: Created slice kubepods-besteffort-pod9349fa40_4aa1_44d3_a3a7_8c6748ecbd04.slice - libcontainer container 
kubepods-besteffort-pod9349fa40_4aa1_44d3_a3a7_8c6748ecbd04.slice. Sep 5 00:11:39.321432 containerd[1435]: time="2025-09-05T00:11:39.321387175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x7qbm,Uid:9349fa40-4aa1-44d3-a3a7-8c6748ecbd04,Namespace:calico-system,Attempt:0,}" Sep 5 00:11:39.370198 containerd[1435]: time="2025-09-05T00:11:39.370124214Z" level=error msg="Failed to destroy network for sandbox \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:39.370556 containerd[1435]: time="2025-09-05T00:11:39.370509263Z" level=error msg="encountered an error cleaning up failed sandbox \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:39.370599 containerd[1435]: time="2025-09-05T00:11:39.370571945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x7qbm,Uid:9349fa40-4aa1-44d3-a3a7-8c6748ecbd04,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:39.370812 kubelet[2479]: E0905 00:11:39.370760 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:39.371066 kubelet[2479]: E0905 00:11:39.370824 2479 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x7qbm" Sep 5 00:11:39.371066 kubelet[2479]: E0905 00:11:39.370844 2479 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x7qbm" Sep 5 00:11:39.371066 kubelet[2479]: E0905 00:11:39.370886 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x7qbm_calico-system(9349fa40-4aa1-44d3-a3a7-8c6748ecbd04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x7qbm_calico-system(9349fa40-4aa1-44d3-a3a7-8c6748ecbd04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x7qbm" podUID="9349fa40-4aa1-44d3-a3a7-8c6748ecbd04" Sep 5 00:11:39.418906 containerd[1435]: time="2025-09-05T00:11:39.418857492Z" level=info msg="StopPodSandbox for \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\"" Sep 5 00:11:39.419197 containerd[1435]: time="2025-09-05T00:11:39.419097178Z" level=info msg="Ensure that sandbox df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2 in task-service has been cleanup successfully" Sep 5 00:11:39.420322 kubelet[2479]: I0905 00:11:39.419456 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:11:39.421315 kubelet[2479]: I0905 00:11:39.421061 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:11:39.422299 kubelet[2479]: I0905 00:11:39.422074 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:11:39.423873 containerd[1435]: time="2025-09-05T00:11:39.422564426Z" level=info msg="StopPodSandbox for \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\"" Sep 5 00:11:39.423873 containerd[1435]: time="2025-09-05T00:11:39.422660228Z" level=info msg="StopPodSandbox for \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\"" Sep 5 00:11:39.423873 containerd[1435]: time="2025-09-05T00:11:39.422725710Z" level=info msg="Ensure that sandbox b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c in task-service has been cleanup successfully" Sep 5 00:11:39.423873 containerd[1435]: time="2025-09-05T00:11:39.422795912Z" level=info msg="Ensure that sandbox 7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b in task-service has been cleanup successfully" Sep 5 00:11:39.425078 kubelet[2479]: I0905 00:11:39.425045 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:11:39.425710 containerd[1435]: time="2025-09-05T00:11:39.425480700Z" level=info msg="StopPodSandbox for \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\"" Sep 5 00:11:39.426748 containerd[1435]: time="2025-09-05T00:11:39.426707291Z" level=info msg="Ensure that sandbox b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6 in task-service has been cleanup successfully" Sep 5 00:11:39.426897 kubelet[2479]: I0905 00:11:39.426848 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:11:39.428817 containerd[1435]: time="2025-09-05T00:11:39.427933082Z" level=info msg="StopPodSandbox for \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\"" Sep 5 00:11:39.428817 containerd[1435]: time="2025-09-05T00:11:39.428091846Z" level=info msg="Ensure that sandbox 16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c in task-service has been cleanup successfully" Sep 5 00:11:39.428817 containerd[1435]: time="2025-09-05T00:11:39.428557178Z" level=info 
msg="StopPodSandbox for \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\"" Sep 5 00:11:39.428817 containerd[1435]: time="2025-09-05T00:11:39.428720702Z" level=info msg="Ensure that sandbox 01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08 in task-service has been cleanup successfully" Sep 5 00:11:39.428953 kubelet[2479]: I0905 00:11:39.428019 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:11:39.431049 kubelet[2479]: I0905 00:11:39.430524 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:11:39.431656 containerd[1435]: time="2025-09-05T00:11:39.431628136Z" level=info msg="StopPodSandbox for \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\"" Sep 5 00:11:39.432235 containerd[1435]: time="2025-09-05T00:11:39.432180510Z" level=info msg="Ensure that sandbox 2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b in task-service has been cleanup successfully" Sep 5 00:11:39.433330 kubelet[2479]: I0905 00:11:39.433232 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:11:39.435624 kubelet[2479]: E0905 00:11:39.435540 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:39.436376 containerd[1435]: time="2025-09-05T00:11:39.436323616Z" level=info msg="StopPodSandbox for \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\"" Sep 5 00:11:39.436862 containerd[1435]: time="2025-09-05T00:11:39.436494700Z" level=info msg="Ensure that sandbox 6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06 in task-service has been cleanup successfully" Sep 5 00:11:39.476880 containerd[1435]: time="2025-09-05T00:11:39.476832725Z" level=error msg="StopPodSandbox for \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\" failed" error="failed to destroy network for sandbox \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:39.477702 kubelet[2479]: E0905 00:11:39.477556 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:11:39.479890 containerd[1435]: time="2025-09-05T00:11:39.479476912Z" level=error msg="StopPodSandbox for \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\" failed" error="failed to destroy network for sandbox \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:39.481570 kubelet[2479]: 
E0905 00:11:39.481527 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:11:39.482699 kubelet[2479]: E0905 00:11:39.482637 2479 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c"} Sep 5 00:11:39.482811 kubelet[2479]: E0905 00:11:39.482796 2479 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea063706-54f6-49f3-9a40-c61fd044db8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:11:39.482915 kubelet[2479]: E0905 00:11:39.482897 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea063706-54f6-49f3-9a40-c61fd044db8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8r5bf" podUID="ea063706-54f6-49f3-9a40-c61fd044db8b" Sep 5 00:11:39.483464 kubelet[2479]: E0905 00:11:39.483445 2479 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b"} Sep 5 00:11:39.483557 kubelet[2479]: E0905 00:11:39.483542 2479 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5167327f-f042-4e3c-b7e0-0cc3388932ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:11:39.483653 kubelet[2479]: E0905 00:11:39.483636 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5167327f-f042-4e3c-b7e0-0cc3388932ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-x9dsg" podUID="5167327f-f042-4e3c-b7e0-0cc3388932ec" Sep 5 00:11:39.486868 containerd[1435]: time="2025-09-05T00:11:39.486760417Z" level=error msg="StopPodSandbox for \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\" failed" error="failed 
to destroy network for sandbox \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:39.487075 kubelet[2479]: E0905 00:11:39.487041 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:11:39.487124 kubelet[2479]: E0905 00:11:39.487080 2479 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2"} Sep 5 00:11:39.487124 kubelet[2479]: E0905 00:11:39.487105 2479 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1c64de6c-5dc9-4e95-be35-a2e9b0b844f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:11:39.487208 kubelet[2479]: E0905 00:11:39.487124 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1c64de6c-5dc9-4e95-be35-a2e9b0b844f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dcc88dcbb-zdnrb" podUID="1c64de6c-5dc9-4e95-be35-a2e9b0b844f7" Sep 5 00:11:39.487254 containerd[1435]: time="2025-09-05T00:11:39.487118066Z" level=error msg="StopPodSandbox for \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\" failed" error="failed to destroy network for sandbox \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:39.487399 kubelet[2479]: E0905 00:11:39.487367 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:11:39.487399 kubelet[2479]: E0905 00:11:39.487398 2479 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6"} Sep 5 00:11:39.487485 kubelet[2479]: E0905 00:11:39.487418 2479 
kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06a23f84-da38-460c-9db8-35e6a608cf99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:11:39.487485 kubelet[2479]: E0905 00:11:39.487439 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06a23f84-da38-460c-9db8-35e6a608cf99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85ffc8f694-nf5lk" podUID="06a23f84-da38-460c-9db8-35e6a608cf99" Sep 5 00:11:39.489654 containerd[1435]: time="2025-09-05T00:11:39.489534728Z" level=error msg="StopPodSandbox for \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\" failed" error="failed to destroy network for sandbox \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:39.489930 kubelet[2479]: E0905 00:11:39.489800 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:11:39.489930 kubelet[2479]: E0905 00:11:39.489838 2479 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06"} Sep 5 00:11:39.489930 kubelet[2479]: E0905 00:11:39.489887 2479 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b03c02b3-c57d-4b35-90a2-63009120bbea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:11:39.489930 kubelet[2479]: E0905 00:11:39.489907 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b03c02b3-c57d-4b35-90a2-63009120bbea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-5447c7486-kqmj4" podUID="b03c02b3-c57d-4b35-90a2-63009120bbea" Sep 5 00:11:39.490083 containerd[1435]: time="2025-09-05T00:11:39.489988619Z" level=error msg="StopPodSandbox for \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\" failed" error="failed to destroy network for sandbox \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:39.490151 kubelet[2479]: E0905 00:11:39.490118 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:11:39.490190 kubelet[2479]: E0905 00:11:39.490151 2479 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08"} Sep 5 00:11:39.490190 kubelet[2479]: E0905 00:11:39.490173 2479 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9349fa40-4aa1-44d3-a3a7-8c6748ecbd04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:11:39.490246 kubelet[2479]: E0905 00:11:39.490192 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9349fa40-4aa1-44d3-a3a7-8c6748ecbd04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x7qbm" podUID="9349fa40-4aa1-44d3-a3a7-8c6748ecbd04" Sep 5 00:11:39.493741 containerd[1435]: time="2025-09-05T00:11:39.493688673Z" level=error msg="StopPodSandbox for \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\" failed" error="failed to destroy network for sandbox \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:39.494014 kubelet[2479]: E0905 00:11:39.493869 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:11:39.494014 kubelet[2479]: E0905 00:11:39.493908 2479 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b"} Sep 5 00:11:39.494014 kubelet[2479]: E0905 00:11:39.493929 2479 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:11:39.494014 kubelet[2479]: E0905 00:11:39.493948 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-r9lrh" podUID="cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e" Sep 5 00:11:39.494218 containerd[1435]: time="2025-09-05T00:11:39.494187846Z" level=error msg="StopPodSandbox for \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\" failed" error="failed to destroy network for sandbox \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:11:39.494389 kubelet[2479]: E0905 00:11:39.494362 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:11:39.494496 kubelet[2479]: E0905 00:11:39.494396 2479 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c"} Sep 5 00:11:39.494496 kubelet[2479]: E0905 00:11:39.494433 2479 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49d7d624-94b8-4c95-b7e4-d89b51613009\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:11:39.494496 kubelet[2479]: E0905 00:11:39.494452 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49d7d624-94b8-4c95-b7e4-d89b51613009\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85ffc8f694-8vh85" podUID="49d7d624-94b8-4c95-b7e4-d89b51613009" Sep 5 00:11:39.517660 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c-shm.mount: Deactivated successfully. Sep 5 00:11:39.517757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06-shm.mount: Deactivated successfully. Sep 5 00:11:42.328887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761584425.mount: Deactivated successfully. Sep 5 00:11:42.619235 containerd[1435]: time="2025-09-05T00:11:42.619118582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:42.619983 containerd[1435]: time="2025-09-05T00:11:42.619946002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 00:11:42.620648 containerd[1435]: time="2025-09-05T00:11:42.620623097Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:42.623309 containerd[1435]: time="2025-09-05T00:11:42.623155676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:42.623693 containerd[1435]: time="2025-09-05T00:11:42.623655367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.206785872s" Sep 5 00:11:42.623728 containerd[1435]: time="2025-09-05T00:11:42.623692608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 00:11:42.636110 containerd[1435]: time="2025-09-05T00:11:42.636069615Z" level=info msg="CreateContainer within sandbox \"d9018edf71186903e66be9b2f7cd9973ddcb261f6d673bd6426e4fce7e41a079\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:11:42.649976 containerd[1435]: time="2025-09-05T00:11:42.649925176Z" level=info msg="CreateContainer within sandbox \"d9018edf71186903e66be9b2f7cd9973ddcb261f6d673bd6426e4fce7e41a079\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7fd5dc0c3f361cfeae4bd2dc73949e861b8a1c97fc006582768750c8f21e269f\"" Sep 5 00:11:42.651174 containerd[1435]: time="2025-09-05T00:11:42.650456508Z" level=info msg="StartContainer for \"7fd5dc0c3f361cfeae4bd2dc73949e861b8a1c97fc006582768750c8f21e269f\"" Sep 5 00:11:42.701443 systemd[1]: Started cri-containerd-7fd5dc0c3f361cfeae4bd2dc73949e861b8a1c97fc006582768750c8f21e269f.scope - libcontainer container 7fd5dc0c3f361cfeae4bd2dc73949e861b8a1c97fc006582768750c8f21e269f. 
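Every failed RunPodSandbox/StopPodSandbox above carries the same root cause: the Calico CNI plugin gates on /var/lib/calico/nodename, a file that the calico/node container writes on startup, and that container only starts here (7fd5dc0c3f361cfeae4bd2dc73949e861b8a1c97fc006582768750c8f21e269f). A minimal Go sketch of that gate — illustrative, not Calico's actual source; the path and error wording are taken from the log:

```go
// Sketch of the nodename gate that produced every "stat /var/lib/calico/nodename"
// failure above: until calico/node writes this file, every CNI ADD/DEL fails.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // path taken from the log

func detectNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		if os.IsNotExist(err) {
			// This matches the error string seen in every failed sandbox op above.
			return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
		}
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}
```

Once calico-node writes the file, the kubelet's periodic sandbox retries start to succeed, which is what the teardown entries below show.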
Sep 5 00:11:42.729344 containerd[1435]: time="2025-09-05T00:11:42.729304134Z" level=info msg="StartContainer for \"7fd5dc0c3f361cfeae4bd2dc73949e861b8a1c97fc006582768750c8f21e269f\" returns successfully" Sep 5 00:11:42.844585 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 00:11:42.844670 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 5 00:11:42.928135 containerd[1435]: time="2025-09-05T00:11:42.928012335Z" level=info msg="StopPodSandbox for \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\"" Sep 5 00:11:43.082350 containerd[1435]: 2025-09-05 00:11:43.000 [INFO][3854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:11:43.082350 containerd[1435]: 2025-09-05 00:11:43.001 [INFO][3854] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" iface="eth0" netns="/var/run/netns/cni-7ccad9e8-8092-0a83-20a2-ef412161ddef" Sep 5 00:11:43.082350 containerd[1435]: 2025-09-05 00:11:43.001 [INFO][3854] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" iface="eth0" netns="/var/run/netns/cni-7ccad9e8-8092-0a83-20a2-ef412161ddef" Sep 5 00:11:43.082350 containerd[1435]: 2025-09-05 00:11:43.003 [INFO][3854] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" iface="eth0" netns="/var/run/netns/cni-7ccad9e8-8092-0a83-20a2-ef412161ddef" Sep 5 00:11:43.082350 containerd[1435]: 2025-09-05 00:11:43.003 [INFO][3854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:11:43.082350 containerd[1435]: 2025-09-05 00:11:43.003 [INFO][3854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:11:43.082350 containerd[1435]: 2025-09-05 00:11:43.061 [INFO][3871] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" HandleID="k8s-pod-network.df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Workload="localhost-k8s-whisker--dcc88dcbb--zdnrb-eth0" Sep 5 00:11:43.082350 containerd[1435]: 2025-09-05 00:11:43.061 [INFO][3871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:43.082350 containerd[1435]: 2025-09-05 00:11:43.061 [INFO][3871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:43.082350 containerd[1435]: 2025-09-05 00:11:43.073 [WARNING][3871] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" HandleID="k8s-pod-network.df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Workload="localhost-k8s-whisker--dcc88dcbb--zdnrb-eth0" Sep 5 00:11:43.082350 containerd[1435]: 2025-09-05 00:11:43.073 [INFO][3871] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" HandleID="k8s-pod-network.df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Workload="localhost-k8s-whisker--dcc88dcbb--zdnrb-eth0" Sep 5 00:11:43.082350 containerd[1435]: 2025-09-05 00:11:43.076 [INFO][3871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:43.082350 containerd[1435]: 2025-09-05 00:11:43.080 [INFO][3854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:11:43.082350 containerd[1435]: time="2025-09-05T00:11:43.082208452Z" level=info msg="TearDown network for sandbox \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\" successfully" Sep 5 00:11:43.082350 containerd[1435]: time="2025-09-05T00:11:43.082240733Z" level=info msg="StopPodSandbox for \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\" returns successfully" Sep 5 00:11:43.175753 kubelet[2479]: I0905 00:11:43.175702 2479 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c64de6c-5dc9-4e95-be35-a2e9b0b844f7-whisker-ca-bundle\") pod \"1c64de6c-5dc9-4e95-be35-a2e9b0b844f7\" (UID: \"1c64de6c-5dc9-4e95-be35-a2e9b0b844f7\") " Sep 5 00:11:43.175753 kubelet[2479]: I0905 00:11:43.175754 2479 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1c64de6c-5dc9-4e95-be35-a2e9b0b844f7-whisker-backend-key-pair\") pod \"1c64de6c-5dc9-4e95-be35-a2e9b0b844f7\" (UID: \"1c64de6c-5dc9-4e95-be35-a2e9b0b844f7\") " Sep 5 00:11:43.176133 kubelet[2479]: I0905 00:11:43.175781 2479 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ljhl\" (UniqueName: \"kubernetes.io/projected/1c64de6c-5dc9-4e95-be35-a2e9b0b844f7-kube-api-access-8ljhl\") pod \"1c64de6c-5dc9-4e95-be35-a2e9b0b844f7\" (UID: \"1c64de6c-5dc9-4e95-be35-a2e9b0b844f7\") " Sep 5 00:11:43.184560 kubelet[2479]: I0905 00:11:43.184459 2479 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c64de6c-5dc9-4e95-be35-a2e9b0b844f7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1c64de6c-5dc9-4e95-be35-a2e9b0b844f7" (UID: "1c64de6c-5dc9-4e95-be35-a2e9b0b844f7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 00:11:43.185461 kubelet[2479]: I0905 00:11:43.185424 2479 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c64de6c-5dc9-4e95-be35-a2e9b0b844f7-kube-api-access-8ljhl" (OuterVolumeSpecName: "kube-api-access-8ljhl") pod "1c64de6c-5dc9-4e95-be35-a2e9b0b844f7" (UID: "1c64de6c-5dc9-4e95-be35-a2e9b0b844f7"). InnerVolumeSpecName "kube-api-access-8ljhl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:11:43.190482 kubelet[2479]: I0905 00:11:43.190440 2479 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c64de6c-5dc9-4e95-be35-a2e9b0b844f7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1c64de6c-5dc9-4e95-be35-a2e9b0b844f7" (UID: "1c64de6c-5dc9-4e95-be35-a2e9b0b844f7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:11:43.276502 kubelet[2479]: I0905 00:11:43.276449 2479 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1c64de6c-5dc9-4e95-be35-a2e9b0b844f7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:43.276502 kubelet[2479]: I0905 00:11:43.276485 2479 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8ljhl\" (UniqueName: \"kubernetes.io/projected/1c64de6c-5dc9-4e95-be35-a2e9b0b844f7-kube-api-access-8ljhl\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:43.276502 kubelet[2479]: I0905 00:11:43.276494 2479 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c64de6c-5dc9-4e95-be35-a2e9b0b844f7-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:43.322561 systemd[1]: Removed slice kubepods-besteffort-pod1c64de6c_5dc9_4e95_be35_a2e9b0b844f7.slice - libcontainer container kubepods-besteffort-pod1c64de6c_5dc9_4e95_be35_a2e9b0b844f7.slice. Sep 5 00:11:43.329909 systemd[1]: run-netns-cni\x2d7ccad9e8\x2d8092\x2d0a83\x2d20a2\x2def412161ddef.mount: Deactivated successfully. Sep 5 00:11:43.329996 systemd[1]: var-lib-kubelet-pods-1c64de6c\x2d5dc9\x2d4e95\x2dbe35\x2da2e9b0b844f7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8ljhl.mount: Deactivated successfully. Sep 5 00:11:43.330055 systemd[1]: var-lib-kubelet-pods-1c64de6c\x2d5dc9\x2d4e95\x2dbe35\x2da2e9b0b844f7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 00:11:43.588601 kubelet[2479]: I0905 00:11:43.587556 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kwvbm" podStartSLOduration=2.285498077 podStartE2EDuration="13.58753986s" podCreationTimestamp="2025-09-05 00:11:30 +0000 UTC" firstStartedPulling="2025-09-05 00:11:31.322333801 +0000 UTC m=+24.084905222" lastFinishedPulling="2025-09-05 00:11:42.624375624 +0000 UTC m=+35.386947005" observedRunningTime="2025-09-05 00:11:43.587391657 +0000 UTC m=+36.349963078" watchObservedRunningTime="2025-09-05 00:11:43.58753986 +0000 UTC m=+36.350111241" Sep 5 00:11:43.618152 systemd[1]: Created slice kubepods-besteffort-pod18df299d_fdf1_4e18_970e_382f443308c4.slice - libcontainer container kubepods-besteffort-pod18df299d_fdf1_4e18_970e_382f443308c4.slice. 
Sep 5 00:11:43.678610 kubelet[2479]: I0905 00:11:43.678559 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18df299d-fdf1-4e18-970e-382f443308c4-whisker-backend-key-pair\") pod \"whisker-5bf485d45-p2jfx\" (UID: \"18df299d-fdf1-4e18-970e-382f443308c4\") " pod="calico-system/whisker-5bf485d45-p2jfx" Sep 5 00:11:43.678610 kubelet[2479]: I0905 00:11:43.678612 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18df299d-fdf1-4e18-970e-382f443308c4-whisker-ca-bundle\") pod \"whisker-5bf485d45-p2jfx\" (UID: \"18df299d-fdf1-4e18-970e-382f443308c4\") " pod="calico-system/whisker-5bf485d45-p2jfx" Sep 5 00:11:43.678786 kubelet[2479]: I0905 00:11:43.678689 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkqz\" (UniqueName: \"kubernetes.io/projected/18df299d-fdf1-4e18-970e-382f443308c4-kube-api-access-8tkqz\") pod \"whisker-5bf485d45-p2jfx\" (UID: \"18df299d-fdf1-4e18-970e-382f443308c4\") " pod="calico-system/whisker-5bf485d45-p2jfx" Sep 5 00:11:43.921368 containerd[1435]: time="2025-09-05T00:11:43.921248408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bf485d45-p2jfx,Uid:18df299d-fdf1-4e18-970e-382f443308c4,Namespace:calico-system,Attempt:0,}" Sep 5 00:11:44.029777 systemd-networkd[1373]: cali9069832c460: Link UP Sep 5 00:11:44.029968 systemd-networkd[1373]: cali9069832c460: Gained carrier Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:43.949 [INFO][3897] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:43.964 [INFO][3897] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5bf485d45--p2jfx-eth0 whisker-5bf485d45- calico-system 18df299d-fdf1-4e18-970e-382f443308c4 919 0 2025-09-05 00:11:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5bf485d45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5bf485d45-p2jfx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9069832c460 [] [] }} ContainerID="f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" Namespace="calico-system" Pod="whisker-5bf485d45-p2jfx" WorkloadEndpoint="localhost-k8s-whisker--5bf485d45--p2jfx-" Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:43.964 [INFO][3897] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" Namespace="calico-system" Pod="whisker-5bf485d45-p2jfx" WorkloadEndpoint="localhost-k8s-whisker--5bf485d45--p2jfx-eth0" Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:43.985 [INFO][3910] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" HandleID="k8s-pod-network.f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" Workload="localhost-k8s-whisker--5bf485d45--p2jfx-eth0" Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:43.985 [INFO][3910] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" 
HandleID="k8s-pod-network.f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" Workload="localhost-k8s-whisker--5bf485d45--p2jfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5bf485d45-p2jfx", "timestamp":"2025-09-05 00:11:43.98535401 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:43.985 [INFO][3910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:43.985 [INFO][3910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:43.985 [INFO][3910] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:43.995 [INFO][3910] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" host="localhost" Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:44.000 [INFO][3910] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:44.003 [INFO][3910] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:44.005 [INFO][3910] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:44.006 [INFO][3910] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:44.006 [INFO][3910] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" host="localhost" Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:44.007 [INFO][3910] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4 Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:44.016 [INFO][3910] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" host="localhost" Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:44.020 [INFO][3910] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" host="localhost" Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:44.020 [INFO][3910] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" host="localhost" Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:44.021 [INFO][3910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:11:44.041107 containerd[1435]: 2025-09-05 00:11:44.021 [INFO][3910] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" HandleID="k8s-pod-network.f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" Workload="localhost-k8s-whisker--5bf485d45--p2jfx-eth0" Sep 5 00:11:44.041964 containerd[1435]: 2025-09-05 00:11:44.022 [INFO][3897] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" Namespace="calico-system" Pod="whisker-5bf485d45-p2jfx" WorkloadEndpoint="localhost-k8s-whisker--5bf485d45--p2jfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5bf485d45--p2jfx-eth0", GenerateName:"whisker-5bf485d45-", Namespace:"calico-system", SelfLink:"", UID:"18df299d-fdf1-4e18-970e-382f443308c4", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bf485d45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5bf485d45-p2jfx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9069832c460", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:44.041964 containerd[1435]: 2025-09-05 00:11:44.022 [INFO][3897] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" Namespace="calico-system" Pod="whisker-5bf485d45-p2jfx" WorkloadEndpoint="localhost-k8s-whisker--5bf485d45--p2jfx-eth0" Sep 5 00:11:44.041964 containerd[1435]: 2025-09-05 00:11:44.023 [INFO][3897] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9069832c460 ContainerID="f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" Namespace="calico-system" Pod="whisker-5bf485d45-p2jfx" WorkloadEndpoint="localhost-k8s-whisker--5bf485d45--p2jfx-eth0" Sep 5 00:11:44.041964 containerd[1435]: 2025-09-05 00:11:44.030 [INFO][3897] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" Namespace="calico-system" Pod="whisker-5bf485d45-p2jfx" WorkloadEndpoint="localhost-k8s-whisker--5bf485d45--p2jfx-eth0" Sep 5 00:11:44.041964 containerd[1435]: 2025-09-05 00:11:44.031 [INFO][3897] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" Namespace="calico-system" Pod="whisker-5bf485d45-p2jfx" WorkloadEndpoint="localhost-k8s-whisker--5bf485d45--p2jfx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5bf485d45--p2jfx-eth0", GenerateName:"whisker-5bf485d45-", Namespace:"calico-system", SelfLink:"", UID:"18df299d-fdf1-4e18-970e-382f443308c4", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bf485d45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4", Pod:"whisker-5bf485d45-p2jfx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9069832c460", MAC:"02:a0:37:56:5c:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:44.041964 containerd[1435]: 2025-09-05 00:11:44.038 [INFO][3897] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4" Namespace="calico-system" Pod="whisker-5bf485d45-p2jfx" WorkloadEndpoint="localhost-k8s-whisker--5bf485d45--p2jfx-eth0" Sep 5 00:11:44.054563 containerd[1435]: time="2025-09-05T00:11:44.054159405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:44.054674 containerd[1435]: time="2025-09-05T00:11:44.054554574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:44.054674 containerd[1435]: time="2025-09-05T00:11:44.054592335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:44.054756 containerd[1435]: time="2025-09-05T00:11:44.054703777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:44.076452 systemd[1]: Started cri-containerd-f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4.scope - libcontainer container f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4. 
Sep 5 00:11:44.085393 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:44.101391 containerd[1435]: time="2025-09-05T00:11:44.101360518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bf485d45-p2jfx,Uid:18df299d-fdf1-4e18-970e-382f443308c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4\"" Sep 5 00:11:44.102851 containerd[1435]: time="2025-09-05T00:11:44.102820190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 00:11:44.322344 kernel: bpftool[4092]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 00:11:44.446898 kubelet[2479]: I0905 00:11:44.446864 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:11:44.508505 systemd-networkd[1373]: vxlan.calico: Link UP Sep 5 00:11:44.508513 systemd-networkd[1373]: vxlan.calico: Gained carrier Sep 5 00:11:45.177819 containerd[1435]: time="2025-09-05T00:11:45.177778247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:45.180122 containerd[1435]: time="2025-09-05T00:11:45.180063015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 5 00:11:45.185855 containerd[1435]: time="2025-09-05T00:11:45.185824018Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:45.187962 containerd[1435]: time="2025-09-05T00:11:45.187934223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:45.188827 containerd[1435]: time="2025-09-05T00:11:45.188798441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.085945451s" Sep 5 00:11:45.189237 containerd[1435]: time="2025-09-05T00:11:45.188832882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 5 00:11:45.193320 containerd[1435]: time="2025-09-05T00:11:45.193273697Z" level=info msg="CreateContainer within sandbox \"f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 00:11:45.211350 containerd[1435]: time="2025-09-05T00:11:45.211272440Z" level=info msg="CreateContainer within sandbox \"f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8fbf7f0c9f5ef4a9704ed637901d77980f6c5324bb0fb9fc9838a6ea543001cc\"" Sep 5 00:11:45.211882 containerd[1435]: time="2025-09-05T00:11:45.211856652Z" level=info msg="StartContainer for \"8fbf7f0c9f5ef4a9704ed637901d77980f6c5324bb0fb9fc9838a6ea543001cc\"" Sep 5 00:11:45.238435 systemd[1]: Started cri-containerd-8fbf7f0c9f5ef4a9704ed637901d77980f6c5324bb0fb9fc9838a6ea543001cc.scope - 
libcontainer container 8fbf7f0c9f5ef4a9704ed637901d77980f6c5324bb0fb9fc9838a6ea543001cc. Sep 5 00:11:45.263475 containerd[1435]: time="2025-09-05T00:11:45.263420271Z" level=info msg="StartContainer for \"8fbf7f0c9f5ef4a9704ed637901d77980f6c5324bb0fb9fc9838a6ea543001cc\" returns successfully" Sep 5 00:11:45.264600 containerd[1435]: time="2025-09-05T00:11:45.264369811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 00:11:45.314087 kubelet[2479]: I0905 00:11:45.314046 2479 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c64de6c-5dc9-4e95-be35-a2e9b0b844f7" path="/var/lib/kubelet/pods/1c64de6c-5dc9-4e95-be35-a2e9b0b844f7/volumes" Sep 5 00:11:45.792396 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Sep 5 00:11:46.048562 systemd-networkd[1373]: cali9069832c460: Gained IPv6LL Sep 5 00:11:46.716620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110328753.mount: Deactivated successfully. Sep 5 00:11:46.730527 containerd[1435]: time="2025-09-05T00:11:46.730456403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:46.731519 containerd[1435]: time="2025-09-05T00:11:46.731489825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 00:11:46.732320 containerd[1435]: time="2025-09-05T00:11:46.732268481Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:46.734737 containerd[1435]: time="2025-09-05T00:11:46.734378405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:46.735343 containerd[1435]: time="2025-09-05T00:11:46.735201622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.47079753s" Sep 5 00:11:46.735343 containerd[1435]: time="2025-09-05T00:11:46.735236982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 00:11:46.740038 containerd[1435]: time="2025-09-05T00:11:46.739863198Z" level=info msg="CreateContainer within sandbox \"f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 00:11:46.751464 containerd[1435]: time="2025-09-05T00:11:46.751420998Z" level=info msg="CreateContainer within sandbox \"f2e77209c8959f93304a654c2282ac19ecc328e01cfc8c59aff5b287f9b5e1b4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"83339a017891497c67398c123afac8d2e98fe5510169a2ed7dd6a775a33b8008\"" Sep 5 00:11:46.752314 containerd[1435]: time="2025-09-05T00:11:46.751967650Z" level=info msg="StartContainer for \"83339a017891497c67398c123afac8d2e98fe5510169a2ed7dd6a775a33b8008\"" Sep 5 00:11:46.791453 systemd[1]: Started 
cri-containerd-83339a017891497c67398c123afac8d2e98fe5510169a2ed7dd6a775a33b8008.scope - libcontainer container 83339a017891497c67398c123afac8d2e98fe5510169a2ed7dd6a775a33b8008. Sep 5 00:11:46.817534 containerd[1435]: time="2025-09-05T00:11:46.817418408Z" level=info msg="StartContainer for \"83339a017891497c67398c123afac8d2e98fe5510169a2ed7dd6a775a33b8008\" returns successfully" Sep 5 00:11:47.483732 kubelet[2479]: I0905 00:11:47.482350 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5bf485d45-p2jfx" podStartSLOduration=1.848820189 podStartE2EDuration="4.482333965s" podCreationTimestamp="2025-09-05 00:11:43 +0000 UTC" firstStartedPulling="2025-09-05 00:11:44.102579464 +0000 UTC m=+36.865150885" lastFinishedPulling="2025-09-05 00:11:46.73609324 +0000 UTC m=+39.498664661" observedRunningTime="2025-09-05 00:11:47.478430086 +0000 UTC m=+40.241001547" watchObservedRunningTime="2025-09-05 00:11:47.482333965 +0000 UTC m=+40.244905346" Sep 5 00:11:50.313627 containerd[1435]: time="2025-09-05T00:11:50.313303585Z" level=info msg="StopPodSandbox for \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\"" Sep 5 00:11:50.313947 containerd[1435]: time="2025-09-05T00:11:50.313326106Z" level=info msg="StopPodSandbox for \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\"" Sep 5 00:11:50.314869 containerd[1435]: time="2025-09-05T00:11:50.314414606Z" level=info msg="StopPodSandbox for \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\"" Sep 5 00:11:50.314869 containerd[1435]: time="2025-09-05T00:11:50.314587290Z" level=info msg="StopPodSandbox for \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\"" Sep 5 00:11:50.430830 containerd[1435]: 2025-09-05 00:11:50.387 [INFO][4319] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:11:50.430830 containerd[1435]: 2025-09-05 00:11:50.388 [INFO][4319] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" iface="eth0" netns="/var/run/netns/cni-59f3ae10-6c94-c4cd-0ba3-68eb2bfe0b52" Sep 5 00:11:50.430830 containerd[1435]: 2025-09-05 00:11:50.389 [INFO][4319] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" iface="eth0" netns="/var/run/netns/cni-59f3ae10-6c94-c4cd-0ba3-68eb2bfe0b52" Sep 5 00:11:50.430830 containerd[1435]: 2025-09-05 00:11:50.390 [INFO][4319] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" iface="eth0" netns="/var/run/netns/cni-59f3ae10-6c94-c4cd-0ba3-68eb2bfe0b52" Sep 5 00:11:50.430830 containerd[1435]: 2025-09-05 00:11:50.390 [INFO][4319] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:11:50.430830 containerd[1435]: 2025-09-05 00:11:50.390 [INFO][4319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:11:50.430830 containerd[1435]: 2025-09-05 00:11:50.414 [INFO][4352] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" HandleID="k8s-pod-network.b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Workload="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:11:50.430830 containerd[1435]: 2025-09-05 00:11:50.414 [INFO][4352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:50.430830 containerd[1435]: 2025-09-05 00:11:50.414 [INFO][4352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:50.430830 containerd[1435]: 2025-09-05 00:11:50.426 [WARNING][4352] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" HandleID="k8s-pod-network.b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Workload="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:11:50.430830 containerd[1435]: 2025-09-05 00:11:50.426 [INFO][4352] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" HandleID="k8s-pod-network.b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Workload="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:11:50.430830 containerd[1435]: 2025-09-05 00:11:50.427 [INFO][4352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:50.430830 containerd[1435]: 2025-09-05 00:11:50.429 [INFO][4319] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:11:50.433389 containerd[1435]: time="2025-09-05T00:11:50.430994490Z" level=info msg="TearDown network for sandbox \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\" successfully" Sep 5 00:11:50.433389 containerd[1435]: time="2025-09-05T00:11:50.431030571Z" level=info msg="StopPodSandbox for \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\" returns successfully" Sep 5 00:11:50.433005 systemd[1]: run-netns-cni\x2d59f3ae10\x2d6c94\x2dc4cd\x2d0ba3\x2d68eb2bfe0b52.mount: Deactivated successfully. Sep 5 00:11:50.433671 containerd[1435]: time="2025-09-05T00:11:50.433504658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ffc8f694-nf5lk,Uid:06a23f84-da38-460c-9db8-35e6a608cf99,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:11:50.493535 containerd[1435]: 2025-09-05 00:11:50.383 [INFO][4314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:11:50.493535 containerd[1435]: 2025-09-05 00:11:50.384 [INFO][4314] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" iface="eth0" netns="/var/run/netns/cni-35b16a57-7da9-4cc2-cc44-7674f1c93802" Sep 5 00:11:50.493535 containerd[1435]: 2025-09-05 00:11:50.384 [INFO][4314] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" iface="eth0" netns="/var/run/netns/cni-35b16a57-7da9-4cc2-cc44-7674f1c93802" Sep 5 00:11:50.493535 containerd[1435]: 2025-09-05 00:11:50.384 [INFO][4314] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" iface="eth0" netns="/var/run/netns/cni-35b16a57-7da9-4cc2-cc44-7674f1c93802" Sep 5 00:11:50.493535 containerd[1435]: 2025-09-05 00:11:50.384 [INFO][4314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:11:50.493535 containerd[1435]: 2025-09-05 00:11:50.384 [INFO][4314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:11:50.493535 containerd[1435]: 2025-09-05 00:11:50.422 [INFO][4350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" HandleID="k8s-pod-network.16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Workload="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:11:50.493535 containerd[1435]: 2025-09-05 00:11:50.422 [INFO][4350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:50.493535 containerd[1435]: 2025-09-05 00:11:50.427 [INFO][4350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:50.493535 containerd[1435]: 2025-09-05 00:11:50.439 [WARNING][4350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" HandleID="k8s-pod-network.16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Workload="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:11:50.493535 containerd[1435]: 2025-09-05 00:11:50.439 [INFO][4350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" HandleID="k8s-pod-network.16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Workload="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:11:50.493535 containerd[1435]: 2025-09-05 00:11:50.490 [INFO][4350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:50.493535 containerd[1435]: 2025-09-05 00:11:50.491 [INFO][4314] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:11:50.494066 containerd[1435]: time="2025-09-05T00:11:50.493729796Z" level=info msg="TearDown network for sandbox \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\" successfully" Sep 5 00:11:50.494066 containerd[1435]: time="2025-09-05T00:11:50.493869599Z" level=info msg="StopPodSandbox for \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\" returns successfully" Sep 5 00:11:50.495185 containerd[1435]: time="2025-09-05T00:11:50.495149703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ffc8f694-8vh85,Uid:49d7d624-94b8-4c95-b7e4-d89b51613009,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:11:50.496454 systemd[1]: run-netns-cni\x2d35b16a57\x2d7da9\x2d4cc2\x2dcc44\x2d7674f1c93802.mount: Deactivated successfully. Sep 5 00:11:50.507318 containerd[1435]: 2025-09-05 00:11:50.388 [INFO][4320] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:11:50.507318 containerd[1435]: 2025-09-05 00:11:50.388 [INFO][4320] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" iface="eth0" netns="/var/run/netns/cni-6145f8ba-a64a-607d-d6f4-c420b444e730" Sep 5 00:11:50.507318 containerd[1435]: 2025-09-05 00:11:50.388 [INFO][4320] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" iface="eth0" netns="/var/run/netns/cni-6145f8ba-a64a-607d-d6f4-c420b444e730" Sep 5 00:11:50.507318 containerd[1435]: 2025-09-05 00:11:50.388 [INFO][4320] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" iface="eth0" netns="/var/run/netns/cni-6145f8ba-a64a-607d-d6f4-c420b444e730" Sep 5 00:11:50.507318 containerd[1435]: 2025-09-05 00:11:50.389 [INFO][4320] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:11:50.507318 containerd[1435]: 2025-09-05 00:11:50.389 [INFO][4320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:11:50.507318 containerd[1435]: 2025-09-05 00:11:50.428 [INFO][4354] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" HandleID="k8s-pod-network.01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Workload="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:11:50.507318 containerd[1435]: 2025-09-05 00:11:50.428 [INFO][4354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:50.507318 containerd[1435]: 2025-09-05 00:11:50.490 [INFO][4354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:50.507318 containerd[1435]: 2025-09-05 00:11:50.501 [WARNING][4354] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" HandleID="k8s-pod-network.01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Workload="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:11:50.507318 containerd[1435]: 2025-09-05 00:11:50.501 [INFO][4354] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" HandleID="k8s-pod-network.01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Workload="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:11:50.507318 containerd[1435]: 2025-09-05 00:11:50.503 [INFO][4354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:50.507318 containerd[1435]: 2025-09-05 00:11:50.505 [INFO][4320] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:11:50.509584 systemd[1]: run-netns-cni\x2d6145f8ba\x2da64a\x2d607d\x2dd6f4\x2dc420b444e730.mount: Deactivated successfully. Sep 5 00:11:50.511458 containerd[1435]: time="2025-09-05T00:11:50.511401890Z" level=info msg="TearDown network for sandbox \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\" successfully" Sep 5 00:11:50.511458 containerd[1435]: time="2025-09-05T00:11:50.511455691Z" level=info msg="StopPodSandbox for \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\" returns successfully" Sep 5 00:11:50.512508 containerd[1435]: time="2025-09-05T00:11:50.512477511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x7qbm,Uid:9349fa40-4aa1-44d3-a3a7-8c6748ecbd04,Namespace:calico-system,Attempt:1,}" Sep 5 00:11:50.524973 containerd[1435]: 2025-09-05 00:11:50.387 [INFO][4312] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:11:50.524973 containerd[1435]: 2025-09-05 00:11:50.388 [INFO][4312] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" iface="eth0" netns="/var/run/netns/cni-ea22ead6-6449-24da-fb3b-1fb352bdbe21" Sep 5 00:11:50.524973 containerd[1435]: 2025-09-05 00:11:50.390 [INFO][4312] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" iface="eth0" netns="/var/run/netns/cni-ea22ead6-6449-24da-fb3b-1fb352bdbe21" Sep 5 00:11:50.524973 containerd[1435]: 2025-09-05 00:11:50.393 [INFO][4312] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" iface="eth0" netns="/var/run/netns/cni-ea22ead6-6449-24da-fb3b-1fb352bdbe21" Sep 5 00:11:50.524973 containerd[1435]: 2025-09-05 00:11:50.393 [INFO][4312] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:11:50.524973 containerd[1435]: 2025-09-05 00:11:50.393 [INFO][4312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:11:50.524973 containerd[1435]: 2025-09-05 00:11:50.432 [INFO][4368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" HandleID="k8s-pod-network.2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Workload="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:11:50.524973 containerd[1435]: 2025-09-05 00:11:50.433 [INFO][4368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:50.524973 containerd[1435]: 2025-09-05 00:11:50.503 [INFO][4368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:50.524973 containerd[1435]: 2025-09-05 00:11:50.516 [WARNING][4368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" HandleID="k8s-pod-network.2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Workload="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:11:50.524973 containerd[1435]: 2025-09-05 00:11:50.517 [INFO][4368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" HandleID="k8s-pod-network.2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Workload="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:11:50.524973 containerd[1435]: 2025-09-05 00:11:50.519 [INFO][4368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:50.524973 containerd[1435]: 2025-09-05 00:11:50.521 [INFO][4312] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:11:50.525750 containerd[1435]: time="2025-09-05T00:11:50.525688280Z" level=info msg="TearDown network for sandbox \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\" successfully" Sep 5 00:11:50.525842 containerd[1435]: time="2025-09-05T00:11:50.525804403Z" level=info msg="StopPodSandbox for \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\" returns successfully" Sep 5 00:11:50.526451 kubelet[2479]: E0905 00:11:50.526426 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:50.526829 containerd[1435]: time="2025-09-05T00:11:50.526746980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9lrh,Uid:cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e,Namespace:kube-system,Attempt:1,}" Sep 5 00:11:50.652775 systemd-networkd[1373]: calica7e277e422: Link UP Sep 5 00:11:50.653046 systemd-networkd[1373]: calica7e277e422: Gained carrier Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.561 [INFO][4385] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0 calico-apiserver-85ffc8f694- calico-apiserver 06a23f84-da38-460c-9db8-35e6a608cf99 958 0 2025-09-05 00:11:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85ffc8f694 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-85ffc8f694-nf5lk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calica7e277e422 [] [] }} ContainerID="8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-nf5lk" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-" Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.561 [INFO][4385] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-nf5lk" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.607 [INFO][4440] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" HandleID="k8s-pod-network.8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" Workload="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.607 [INFO][4440] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" HandleID="k8s-pod-network.8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" Workload="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004df30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-85ffc8f694-nf5lk", "timestamp":"2025-09-05 00:11:50.607562868 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.607 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.607 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.607 [INFO][4440] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.620 [INFO][4440] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" host="localhost" Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.628 [INFO][4440] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.632 [INFO][4440] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.634 [INFO][4440] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.636 [INFO][4440] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.636 [INFO][4440] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" host="localhost" Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.638 [INFO][4440] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.643 [INFO][4440] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" host="localhost" Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.647 [INFO][4440] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" host="localhost" Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.647 [INFO][4440] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" host="localhost" Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.647 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
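Worth noting in the three interleaved CNI ADDs above and below: the host-wide IPAM lock serializes them. Worker [4440] acquires at 00:11:50.607 and releases at .647; [4447] logs "About to acquire" at .622 but only acquires at .648, and [4455], queued at .631, gets in at .752, so each request sees a consistent view of the block and the claims (.130, .131, .132) never collide. A toy model of that pattern (hypothetical names, not Calico code):

    import threading

    ipam_lock = threading.Lock()   # stands in for the host-wide IPAM lock
    next_last_octet = [130]        # next unclaimed address in 192.168.88.128/26

    def cni_add(pod):
        with ipam_lock:            # concurrent ADDs queue here, one at a time
            octet = next_last_octet[0]
            next_last_octet[0] += 1
        print(f"{pod} -> 192.168.88.{octet}/26")

    threads = [threading.Thread(target=cni_add, args=(pod,))
               for pod in ("calico-apiserver-85ffc8f694-nf5lk",
                           "calico-apiserver-85ffc8f694-8vh85",
                           "csi-node-driver-x7qbm")]
    for t in threads: t.start()
    for t in threads: t.join()

Which request wins the lock first is scheduling luck, exactly as in the log; the lock only guarantees that the claimed addresses stay unique.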
Sep 5 00:11:50.672514 containerd[1435]: 2025-09-05 00:11:50.647 [INFO][4440] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" HandleID="k8s-pod-network.8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" Workload="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:11:50.673222 containerd[1435]: 2025-09-05 00:11:50.650 [INFO][4385] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-nf5lk" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0", GenerateName:"calico-apiserver-85ffc8f694-", Namespace:"calico-apiserver", SelfLink:"", UID:"06a23f84-da38-460c-9db8-35e6a608cf99", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85ffc8f694", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-85ffc8f694-nf5lk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica7e277e422", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:50.673222 containerd[1435]: 2025-09-05 00:11:50.650 [INFO][4385] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-nf5lk" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:11:50.673222 containerd[1435]: 2025-09-05 00:11:50.650 [INFO][4385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica7e277e422 ContainerID="8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-nf5lk" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:11:50.673222 containerd[1435]: 2025-09-05 00:11:50.651 [INFO][4385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-nf5lk" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:11:50.673222 containerd[1435]: 2025-09-05 00:11:50.652 [INFO][4385] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-nf5lk" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0", GenerateName:"calico-apiserver-85ffc8f694-", Namespace:"calico-apiserver", SelfLink:"", UID:"06a23f84-da38-460c-9db8-35e6a608cf99", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85ffc8f694", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b", Pod:"calico-apiserver-85ffc8f694-nf5lk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica7e277e422", MAC:"ba:e5:3d:d9:92:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:50.673222 containerd[1435]: 2025-09-05 00:11:50.663 [INFO][4385] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-nf5lk" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:11:50.689979 containerd[1435]: time="2025-09-05T00:11:50.689632780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:50.691125 containerd[1435]: time="2025-09-05T00:11:50.689957906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:50.691125 containerd[1435]: time="2025-09-05T00:11:50.689970266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:50.691125 containerd[1435]: time="2025-09-05T00:11:50.690039107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:50.718520 systemd[1]: Started cri-containerd-8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b.scope - libcontainer container 8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b. 
Sep 5 00:11:50.729486 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:50.755744 containerd[1435]: time="2025-09-05T00:11:50.755696668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ffc8f694-nf5lk,Uid:06a23f84-da38-460c-9db8-35e6a608cf99,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b\"" Sep 5 00:11:50.760321 containerd[1435]: time="2025-09-05T00:11:50.760261595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:11:50.760922 systemd-networkd[1373]: cali8f6e1cdbae3: Link UP Sep 5 00:11:50.762245 systemd-networkd[1373]: cali8f6e1cdbae3: Gained carrier Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.587 [INFO][4398] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0 calico-apiserver-85ffc8f694- calico-apiserver 49d7d624-94b8-4c95-b7e4-d89b51613009 957 0 2025-09-05 00:11:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85ffc8f694 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-85ffc8f694-8vh85 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8f6e1cdbae3 [] [] }} ContainerID="e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-8vh85" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-" Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.587 [INFO][4398] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-8vh85" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.622 [INFO][4447] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" HandleID="k8s-pod-network.e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" Workload="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.622 [INFO][4447] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" HandleID="k8s-pod-network.e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" Workload="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-85ffc8f694-8vh85", "timestamp":"2025-09-05 00:11:50.622181865 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.622 [INFO][4447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.648 [INFO][4447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.648 [INFO][4447] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.722 [INFO][4447] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" host="localhost" Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.727 [INFO][4447] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.732 [INFO][4447] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.734 [INFO][4447] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.738 [INFO][4447] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.738 [INFO][4447] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" host="localhost" Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.740 [INFO][4447] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5 Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.745 [INFO][4447] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" host="localhost" Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.751 [INFO][4447] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" host="localhost" Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.751 [INFO][4447] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" host="localhost" Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.751 [INFO][4447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:11:50.774410 containerd[1435]: 2025-09-05 00:11:50.751 [INFO][4447] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" HandleID="k8s-pod-network.e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" Workload="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:11:50.775050 containerd[1435]: 2025-09-05 00:11:50.756 [INFO][4398] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-8vh85" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0", GenerateName:"calico-apiserver-85ffc8f694-", Namespace:"calico-apiserver", SelfLink:"", UID:"49d7d624-94b8-4c95-b7e4-d89b51613009", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85ffc8f694", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-85ffc8f694-8vh85", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f6e1cdbae3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:50.775050 containerd[1435]: 2025-09-05 00:11:50.756 [INFO][4398] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-8vh85" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:11:50.775050 containerd[1435]: 2025-09-05 00:11:50.756 [INFO][4398] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f6e1cdbae3 ContainerID="e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-8vh85" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:11:50.775050 containerd[1435]: 2025-09-05 00:11:50.761 [INFO][4398] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-8vh85" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:11:50.775050 containerd[1435]: 2025-09-05 00:11:50.762 [INFO][4398] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-8vh85" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0", GenerateName:"calico-apiserver-85ffc8f694-", Namespace:"calico-apiserver", SelfLink:"", UID:"49d7d624-94b8-4c95-b7e4-d89b51613009", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85ffc8f694", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5", Pod:"calico-apiserver-85ffc8f694-8vh85", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f6e1cdbae3", MAC:"72:60:84:18:22:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:50.775050 containerd[1435]: 2025-09-05 00:11:50.771 [INFO][4398] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5" Namespace="calico-apiserver" Pod="calico-apiserver-85ffc8f694-8vh85" WorkloadEndpoint="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:11:50.791171 containerd[1435]: time="2025-09-05T00:11:50.790701730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:50.791171 containerd[1435]: time="2025-09-05T00:11:50.790805132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:50.791171 containerd[1435]: time="2025-09-05T00:11:50.790825373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:50.791171 containerd[1435]: time="2025-09-05T00:11:50.790904894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:50.817448 systemd[1]: Started cri-containerd-e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5.scope - libcontainer container e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5. 
Sep 5 00:11:50.828877 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:50.860709 systemd-networkd[1373]: cali1a2fe71483b: Link UP Sep 5 00:11:50.860920 systemd-networkd[1373]: cali1a2fe71483b: Gained carrier Sep 5 00:11:50.870031 containerd[1435]: time="2025-09-05T00:11:50.869999469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ffc8f694-8vh85,Uid:49d7d624-94b8-4c95-b7e4-d89b51613009,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5\"" Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.591 [INFO][4408] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--x7qbm-eth0 csi-node-driver- calico-system 9349fa40-4aa1-44d3-a3a7-8c6748ecbd04 956 0 2025-09-05 00:11:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-x7qbm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1a2fe71483b [] [] }} ContainerID="d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" Namespace="calico-system" Pod="csi-node-driver-x7qbm" WorkloadEndpoint="localhost-k8s-csi--node--driver--x7qbm-" Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.591 [INFO][4408] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" Namespace="calico-system" Pod="csi-node-driver-x7qbm" WorkloadEndpoint="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.631 [INFO][4455] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" HandleID="k8s-pod-network.d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" Workload="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.631 [INFO][4455] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" HandleID="k8s-pod-network.d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" Workload="localhost-k8s-csi--node--driver--x7qbm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000392170), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-x7qbm", "timestamp":"2025-09-05 00:11:50.631248676 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.631 [INFO][4455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.752 [INFO][4455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.752 [INFO][4455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.823 [INFO][4455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" host="localhost" Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.832 [INFO][4455] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.836 [INFO][4455] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.838 [INFO][4455] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.840 [INFO][4455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.840 [INFO][4455] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" host="localhost" Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.842 [INFO][4455] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.845 [INFO][4455] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" host="localhost" Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.852 [INFO][4455] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" host="localhost" Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.852 [INFO][4455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" host="localhost" Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.852 [INFO][4455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
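The [4455] entries trace one complete Calico IPAM auto-assignment, and the timestamps show the serialization at work: the request is stamped 00:11:50.631 but the host-wide lock is only acquired at 00:11:50.752, because concurrent CNI ADDs on the same node queue on that lock. A hedged sketch of the control flow follows; all names are hypothetical stand-ins, the real logic lives in Calico's ipam package.

```go
// Hedged sketch of the sequence the [4455] log lines trace.
package main

import (
	"fmt"
	"sync"
)

var hostIPAMLock sync.Mutex // the "host-wide IPAM lock" in the logs

type block struct {
	cidr string
	next int // next free host offset within the block
}

// "Attempting to assign 1 addresses from block"
func (b *block) nextFree() string {
	ip := fmt.Sprintf("192.168.88.%d/26", 128+b.next)
	b.next++
	return ip
}

func autoAssign(b *block, handleID string) string {
	hostIPAMLock.Lock()         // "About to acquire host-wide IPAM lock."
	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

	// "Looking up existing affinities" / "Trying affinity for
	// 192.168.88.128/26": the node already owns this block, so no
	// new block needs to be claimed.
	ip := b.nextFree()
	// "Creating new handle" + "Writing block in order to claim IPs"
	// would persist handleID -> ip in the datastore before the lock drops.
	_ = handleID
	return ip // "Successfully claimed IPs: [...]"
}

func main() {
	b := &block{cidr: "192.168.88.128/26", next: 4} // .132 is next on this node
	fmt.Println(autoAssign(b, "k8s-pod-network.d41335ea70b4..."))
}
```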
Sep 5 00:11:50.880335 containerd[1435]: 2025-09-05 00:11:50.852 [INFO][4455] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" HandleID="k8s-pod-network.d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" Workload="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:11:50.881668 containerd[1435]: 2025-09-05 00:11:50.854 [INFO][4408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" Namespace="calico-system" Pod="csi-node-driver-x7qbm" WorkloadEndpoint="localhost-k8s-csi--node--driver--x7qbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x7qbm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9349fa40-4aa1-44d3-a3a7-8c6748ecbd04", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-x7qbm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a2fe71483b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:50.881668 containerd[1435]: 2025-09-05 00:11:50.854 [INFO][4408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" Namespace="calico-system" Pod="csi-node-driver-x7qbm" WorkloadEndpoint="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:11:50.881668 containerd[1435]: 2025-09-05 00:11:50.854 [INFO][4408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a2fe71483b ContainerID="d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" Namespace="calico-system" Pod="csi-node-driver-x7qbm" WorkloadEndpoint="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:11:50.881668 containerd[1435]: 2025-09-05 00:11:50.859 [INFO][4408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" Namespace="calico-system" Pod="csi-node-driver-x7qbm" WorkloadEndpoint="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:11:50.881668 containerd[1435]: 2025-09-05 00:11:50.865 [INFO][4408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" Namespace="calico-system" Pod="csi-node-driver-x7qbm" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--x7qbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x7qbm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9349fa40-4aa1-44d3-a3a7-8c6748ecbd04", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c", Pod:"csi-node-driver-x7qbm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a2fe71483b", MAC:"ee:66:fa:1e:eb:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:50.881668 containerd[1435]: 2025-09-05 00:11:50.876 [INFO][4408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c" Namespace="calico-system" Pod="csi-node-driver-x7qbm" WorkloadEndpoint="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:11:50.896533 containerd[1435]: time="2025-09-05T00:11:50.896437529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:50.896533 containerd[1435]: time="2025-09-05T00:11:50.896484410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:50.896533 containerd[1435]: time="2025-09-05T00:11:50.896503130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:50.896779 containerd[1435]: time="2025-09-05T00:11:50.896583492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:50.914433 systemd[1]: Started cri-containerd-d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c.scope - libcontainer container d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c. 
Sep 5 00:11:50.924833 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:50.935141 containerd[1435]: time="2025-09-05T00:11:50.935109020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x7qbm,Uid:9349fa40-4aa1-44d3-a3a7-8c6748ecbd04,Namespace:calico-system,Attempt:1,} returns sandbox id \"d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c\"" Sep 5 00:11:50.957920 systemd-networkd[1373]: cali4ea63c837ee: Link UP Sep 5 00:11:50.958855 systemd-networkd[1373]: cali4ea63c837ee: Gained carrier Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.601 [INFO][4423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0 coredns-674b8bbfcf- kube-system cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e 959 0 2025-09-05 00:11:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-r9lrh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4ea63c837ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r9lrh-" Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.601 [INFO][4423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.644 [INFO][4462] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" HandleID="k8s-pod-network.4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" Workload="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.644 [INFO][4462] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" HandleID="k8s-pod-network.4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" Workload="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000435b40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-r9lrh", "timestamp":"2025-09-05 00:11:50.644030798 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.644 [INFO][4462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.852 [INFO][4462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.852 [INFO][4462] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.923 [INFO][4462] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" host="localhost" Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.931 [INFO][4462] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.939 [INFO][4462] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.940 [INFO][4462] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.942 [INFO][4462] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.942 [INFO][4462] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" host="localhost" Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.944 [INFO][4462] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82 Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.947 [INFO][4462] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" host="localhost" Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.953 [INFO][4462] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" host="localhost" Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.953 [INFO][4462] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" host="localhost" Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.953 [INFO][4462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
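All of the sandboxes in this boot draw from the single affine block 192.168.88.128/26, which spans 192.168.88.128 through 192.168.88.191 (64 addresses); the claims come out sequentially as .131, .132, .133 and, further down, .134 and .135. A quick net/netip check of that containment:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // the node's affine block

	// Addresses claimed in this log: apiserver, csi-node-driver, coredns, ...
	for _, s := range []string{"192.168.88.131", "192.168.88.132",
		"192.168.88.133", "192.168.88.134", "192.168.88.135"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
	}
}
```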
Sep 5 00:11:50.973363 containerd[1435]: 2025-09-05 00:11:50.953 [INFO][4462] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" HandleID="k8s-pod-network.4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" Workload="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:11:50.974770 containerd[1435]: 2025-09-05 00:11:50.955 [INFO][4423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-r9lrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ea63c837ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:50.974770 containerd[1435]: 2025-09-05 00:11:50.955 [INFO][4423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:11:50.974770 containerd[1435]: 2025-09-05 00:11:50.955 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ea63c837ee ContainerID="4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:11:50.974770 containerd[1435]: 2025-09-05 00:11:50.959 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:11:50.974770 
containerd[1435]: 2025-09-05 00:11:50.960 [INFO][4423] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82", Pod:"coredns-674b8bbfcf-r9lrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ea63c837ee", MAC:"d2:63:28:d0:5d:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:50.974770 containerd[1435]: 2025-09-05 00:11:50.970 [INFO][4423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:11:50.988788 containerd[1435]: time="2025-09-05T00:11:50.988621912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:50.988788 containerd[1435]: time="2025-09-05T00:11:50.988682673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:50.988788 containerd[1435]: time="2025-09-05T00:11:50.988702433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:50.990408 containerd[1435]: time="2025-09-05T00:11:50.989278844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:51.008443 systemd[1]: Started cri-containerd-4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82.scope - libcontainer container 4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82. Sep 5 00:11:51.018895 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:51.040208 containerd[1435]: time="2025-09-05T00:11:51.040168951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9lrh,Uid:cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e,Namespace:kube-system,Attempt:1,} returns sandbox id \"4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82\"" Sep 5 00:11:51.041195 kubelet[2479]: E0905 00:11:51.041006 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:51.044882 containerd[1435]: time="2025-09-05T00:11:51.044847797Z" level=info msg="CreateContainer within sandbox \"4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:11:51.058188 containerd[1435]: time="2025-09-05T00:11:51.058154163Z" level=info msg="CreateContainer within sandbox \"4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"44104296edd93c081691d384ba6d61545e7e50a19570cc4cb02c7a60e1d0dd41\"" Sep 5 00:11:51.058669 containerd[1435]: time="2025-09-05T00:11:51.058641692Z" level=info msg="StartContainer for \"44104296edd93c081691d384ba6d61545e7e50a19570cc4cb02c7a60e1d0dd41\"" Sep 5 00:11:51.088457 systemd[1]: Started cri-containerd-44104296edd93c081691d384ba6d61545e7e50a19570cc4cb02c7a60e1d0dd41.scope - libcontainer container 44104296edd93c081691d384ba6d61545e7e50a19570cc4cb02c7a60e1d0dd41. Sep 5 00:11:51.109687 containerd[1435]: time="2025-09-05T00:11:51.109647597Z" level=info msg="StartContainer for \"44104296edd93c081691d384ba6d61545e7e50a19570cc4cb02c7a60e1d0dd41\" returns successfully" Sep 5 00:11:51.313235 containerd[1435]: time="2025-09-05T00:11:51.313193605Z" level=info msg="StopPodSandbox for \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\"" Sep 5 00:11:51.389508 containerd[1435]: 2025-09-05 00:11:51.354 [INFO][4725] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:11:51.389508 containerd[1435]: 2025-09-05 00:11:51.354 [INFO][4725] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" iface="eth0" netns="/var/run/netns/cni-e890b11d-27ac-5137-47c5-528687baf8af" Sep 5 00:11:51.389508 containerd[1435]: 2025-09-05 00:11:51.354 [INFO][4725] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" iface="eth0" netns="/var/run/netns/cni-e890b11d-27ac-5137-47c5-528687baf8af" Sep 5 00:11:51.389508 containerd[1435]: 2025-09-05 00:11:51.354 [INFO][4725] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" iface="eth0" netns="/var/run/netns/cni-e890b11d-27ac-5137-47c5-528687baf8af" Sep 5 00:11:51.389508 containerd[1435]: 2025-09-05 00:11:51.354 [INFO][4725] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:11:51.389508 containerd[1435]: 2025-09-05 00:11:51.354 [INFO][4725] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:11:51.389508 containerd[1435]: 2025-09-05 00:11:51.373 [INFO][4734] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" HandleID="k8s-pod-network.7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Workload="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:11:51.389508 containerd[1435]: 2025-09-05 00:11:51.374 [INFO][4734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:51.389508 containerd[1435]: 2025-09-05 00:11:51.374 [INFO][4734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:51.389508 containerd[1435]: 2025-09-05 00:11:51.382 [WARNING][4734] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" HandleID="k8s-pod-network.7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Workload="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:11:51.389508 containerd[1435]: 2025-09-05 00:11:51.382 [INFO][4734] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" HandleID="k8s-pod-network.7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Workload="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:11:51.389508 containerd[1435]: 2025-09-05 00:11:51.386 [INFO][4734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:51.389508 containerd[1435]: 2025-09-05 00:11:51.387 [INFO][4725] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:11:51.390349 containerd[1435]: time="2025-09-05T00:11:51.389634860Z" level=info msg="TearDown network for sandbox \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\" successfully" Sep 5 00:11:51.390349 containerd[1435]: time="2025-09-05T00:11:51.389657540Z" level=info msg="StopPodSandbox for \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\" returns successfully" Sep 5 00:11:51.390349 containerd[1435]: time="2025-09-05T00:11:51.390207190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-x9dsg,Uid:5167327f-f042-4e3c-b7e0-0cc3388932ec,Namespace:calico-system,Attempt:1,}" Sep 5 00:11:51.442512 systemd[1]: run-netns-cni\x2de890b11d\x2d27ac\x2d5137\x2d47c5\x2d528687baf8af.mount: Deactivated successfully. Sep 5 00:11:51.442612 systemd[1]: run-netns-cni\x2dea22ead6\x2d6449\x2d24da\x2dfb3b\x2d1fb352bdbe21.mount: Deactivated successfully. 
Sep 5 00:11:51.476885 kubelet[2479]: E0905 00:11:51.476856 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:51.490452 kubelet[2479]: I0905 00:11:51.490387 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r9lrh" podStartSLOduration=39.490370924 podStartE2EDuration="39.490370924s" podCreationTimestamp="2025-09-05 00:11:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:11:51.489658791 +0000 UTC m=+44.252230212" watchObservedRunningTime="2025-09-05 00:11:51.490370924 +0000 UTC m=+44.252942305" Sep 5 00:11:51.510656 systemd-networkd[1373]: cali873cbfecd3e: Link UP Sep 5 00:11:51.512188 systemd-networkd[1373]: cali873cbfecd3e: Gained carrier Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.435 [INFO][4742] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--x9dsg-eth0 goldmane-54d579b49d- calico-system 5167327f-f042-4e3c-b7e0-0cc3388932ec 983 0 2025-09-05 00:11:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-x9dsg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali873cbfecd3e [] [] }} ContainerID="dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" Namespace="calico-system" Pod="goldmane-54d579b49d-x9dsg" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x9dsg-" Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.435 [INFO][4742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" Namespace="calico-system" Pod="goldmane-54d579b49d-x9dsg" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.460 [INFO][4756] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" HandleID="k8s-pod-network.dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" Workload="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.460 [INFO][4756] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" HandleID="k8s-pod-network.dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" Workload="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005ac4e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-x9dsg", "timestamp":"2025-09-05 00:11:51.460665214 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.460 [INFO][4756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.460 [INFO][4756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.460 [INFO][4756] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.470 [INFO][4756] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" host="localhost" Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.475 [INFO][4756] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.481 [INFO][4756] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.483 [INFO][4756] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.485 [INFO][4756] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.485 [INFO][4756] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" host="localhost" Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.487 [INFO][4756] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.495 [INFO][4756] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" host="localhost" Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.501 [INFO][4756] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" host="localhost" Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.501 [INFO][4756] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" host="localhost" Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.501 [INFO][4756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:11:51.526902 containerd[1435]: 2025-09-05 00:11:51.501 [INFO][4756] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" HandleID="k8s-pod-network.dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" Workload="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:11:51.527546 containerd[1435]: 2025-09-05 00:11:51.506 [INFO][4742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" Namespace="calico-system" Pod="goldmane-54d579b49d-x9dsg" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--x9dsg-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5167327f-f042-4e3c-b7e0-0cc3388932ec", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-x9dsg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali873cbfecd3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:51.527546 containerd[1435]: 2025-09-05 00:11:51.507 [INFO][4742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" Namespace="calico-system" Pod="goldmane-54d579b49d-x9dsg" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:11:51.527546 containerd[1435]: 2025-09-05 00:11:51.507 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali873cbfecd3e ContainerID="dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" Namespace="calico-system" Pod="goldmane-54d579b49d-x9dsg" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:11:51.527546 containerd[1435]: 2025-09-05 00:11:51.513 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" Namespace="calico-system" Pod="goldmane-54d579b49d-x9dsg" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:11:51.527546 containerd[1435]: 2025-09-05 00:11:51.514 [INFO][4742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" Namespace="calico-system" Pod="goldmane-54d579b49d-x9dsg" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--x9dsg-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5167327f-f042-4e3c-b7e0-0cc3388932ec", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b", Pod:"goldmane-54d579b49d-x9dsg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali873cbfecd3e", MAC:"62:db:ee:ac:5d:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:51.527546 containerd[1435]: 2025-09-05 00:11:51.524 [INFO][4742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b" Namespace="calico-system" Pod="goldmane-54d579b49d-x9dsg" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:11:51.542690 containerd[1435]: time="2025-09-05T00:11:51.542584771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:51.542690 containerd[1435]: time="2025-09-05T00:11:51.542653932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:51.542690 containerd[1435]: time="2025-09-05T00:11:51.542669852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:51.542855 containerd[1435]: time="2025-09-05T00:11:51.542757654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:51.569423 systemd[1]: Started cri-containerd-dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b.scope - libcontainer container dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b. 
Sep 5 00:11:51.588738 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:51.604684 containerd[1435]: time="2025-09-05T00:11:51.604610319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-x9dsg,Uid:5167327f-f042-4e3c-b7e0-0cc3388932ec,Namespace:calico-system,Attempt:1,} returns sandbox id \"dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b\"" Sep 5 00:11:52.129072 systemd-networkd[1373]: calica7e277e422: Gained IPv6LL Sep 5 00:11:52.192504 systemd-networkd[1373]: cali4ea63c837ee: Gained IPv6LL Sep 5 00:11:52.313089 containerd[1435]: time="2025-09-05T00:11:52.313049798Z" level=info msg="StopPodSandbox for \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\"" Sep 5 00:11:52.407211 containerd[1435]: 2025-09-05 00:11:52.367 [INFO][4834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:11:52.407211 containerd[1435]: 2025-09-05 00:11:52.367 [INFO][4834] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" iface="eth0" netns="/var/run/netns/cni-2ab45606-8644-51c4-5cd8-fa9ef2f8faf2" Sep 5 00:11:52.407211 containerd[1435]: 2025-09-05 00:11:52.367 [INFO][4834] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" iface="eth0" netns="/var/run/netns/cni-2ab45606-8644-51c4-5cd8-fa9ef2f8faf2" Sep 5 00:11:52.407211 containerd[1435]: 2025-09-05 00:11:52.367 [INFO][4834] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" iface="eth0" netns="/var/run/netns/cni-2ab45606-8644-51c4-5cd8-fa9ef2f8faf2" Sep 5 00:11:52.407211 containerd[1435]: 2025-09-05 00:11:52.367 [INFO][4834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:11:52.407211 containerd[1435]: 2025-09-05 00:11:52.367 [INFO][4834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:11:52.407211 containerd[1435]: 2025-09-05 00:11:52.388 [INFO][4843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" HandleID="k8s-pod-network.b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Workload="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:11:52.407211 containerd[1435]: 2025-09-05 00:11:52.388 [INFO][4843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:52.407211 containerd[1435]: 2025-09-05 00:11:52.388 [INFO][4843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:52.407211 containerd[1435]: 2025-09-05 00:11:52.397 [WARNING][4843] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" HandleID="k8s-pod-network.b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Workload="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:11:52.407211 containerd[1435]: 2025-09-05 00:11:52.397 [INFO][4843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" HandleID="k8s-pod-network.b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Workload="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:11:52.407211 containerd[1435]: 2025-09-05 00:11:52.399 [INFO][4843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:52.407211 containerd[1435]: 2025-09-05 00:11:52.404 [INFO][4834] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:11:52.410666 systemd[1]: run-netns-cni\x2d2ab45606\x2d8644\x2d51c4\x2d5cd8\x2dfa9ef2f8faf2.mount: Deactivated successfully. Sep 5 00:11:52.411585 containerd[1435]: time="2025-09-05T00:11:52.410780291Z" level=info msg="TearDown network for sandbox \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\" successfully" Sep 5 00:11:52.411585 containerd[1435]: time="2025-09-05T00:11:52.410810851Z" level=info msg="StopPodSandbox for \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\" returns successfully" Sep 5 00:11:52.411659 kubelet[2479]: E0905 00:11:52.411375 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:52.413770 containerd[1435]: time="2025-09-05T00:11:52.413406618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8r5bf,Uid:ea063706-54f6-49f3-9a40-c61fd044db8b,Namespace:kube-system,Attempt:1,}" Sep 5 00:11:52.493480 kubelet[2479]: E0905 00:11:52.493452 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:52.514795 systemd-networkd[1373]: cali8f6e1cdbae3: Gained IPv6LL Sep 5 00:11:52.595319 containerd[1435]: time="2025-09-05T00:11:52.595251077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:52.595617 containerd[1435]: time="2025-09-05T00:11:52.595575683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 5 00:11:52.596400 containerd[1435]: time="2025-09-05T00:11:52.596345617Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:52.597614 systemd-networkd[1373]: cali6c697833cdb: Link UP Sep 5 00:11:52.597920 systemd-networkd[1373]: cali6c697833cdb: Gained carrier Sep 5 00:11:52.599626 containerd[1435]: time="2025-09-05T00:11:52.599579596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:52.600354 containerd[1435]: time="2025-09-05T00:11:52.600276129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id 
\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.839962372s" Sep 5 00:11:52.600487 containerd[1435]: time="2025-09-05T00:11:52.600355370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 00:11:52.602658 containerd[1435]: time="2025-09-05T00:11:52.602629531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:11:52.605068 containerd[1435]: time="2025-09-05T00:11:52.605036495Z" level=info msg="CreateContainer within sandbox \"8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.534 [INFO][4852] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0 coredns-674b8bbfcf- kube-system ea063706-54f6-49f3-9a40-c61fd044db8b 999 0 2025-09-05 00:11:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-8r5bf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6c697833cdb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" Namespace="kube-system" Pod="coredns-674b8bbfcf-8r5bf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8r5bf-" Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.534 [INFO][4852] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" Namespace="kube-system" Pod="coredns-674b8bbfcf-8r5bf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.559 [INFO][4866] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" HandleID="k8s-pod-network.aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" Workload="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.559 [INFO][4866] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" HandleID="k8s-pod-network.aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" Workload="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-8r5bf", "timestamp":"2025-09-05 00:11:52.55958911 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.559 [INFO][4866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.559 [INFO][4866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.559 [INFO][4866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.569 [INFO][4866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" host="localhost" Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.573 [INFO][4866] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.576 [INFO][4866] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.578 [INFO][4866] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.582 [INFO][4866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.582 [INFO][4866] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" host="localhost" Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.583 [INFO][4866] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73 Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.586 [INFO][4866] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" host="localhost" Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.593 [INFO][4866] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" host="localhost" Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.593 [INFO][4866] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" host="localhost" Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.593 [INFO][4866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:11:52.615723 containerd[1435]: 2025-09-05 00:11:52.593 [INFO][4866] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" HandleID="k8s-pod-network.aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" Workload="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:11:52.616218 containerd[1435]: 2025-09-05 00:11:52.596 [INFO][4852] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" Namespace="kube-system" Pod="coredns-674b8bbfcf-8r5bf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ea063706-54f6-49f3-9a40-c61fd044db8b", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-8r5bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c697833cdb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:52.616218 containerd[1435]: 2025-09-05 00:11:52.596 [INFO][4852] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" Namespace="kube-system" Pod="coredns-674b8bbfcf-8r5bf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:11:52.616218 containerd[1435]: 2025-09-05 00:11:52.596 [INFO][4852] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c697833cdb ContainerID="aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" Namespace="kube-system" Pod="coredns-674b8bbfcf-8r5bf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:11:52.616218 containerd[1435]: 2025-09-05 00:11:52.598 [INFO][4852] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" Namespace="kube-system" Pod="coredns-674b8bbfcf-8r5bf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:11:52.616218 
containerd[1435]: 2025-09-05 00:11:52.598 [INFO][4852] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" Namespace="kube-system" Pod="coredns-674b8bbfcf-8r5bf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ea063706-54f6-49f3-9a40-c61fd044db8b", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73", Pod:"coredns-674b8bbfcf-8r5bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c697833cdb", MAC:"c2:44:28:a8:6f:16", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:52.616218 containerd[1435]: 2025-09-05 00:11:52.612 [INFO][4852] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73" Namespace="kube-system" Pod="coredns-674b8bbfcf-8r5bf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:11:52.621216 containerd[1435]: time="2025-09-05T00:11:52.621180788Z" level=info msg="CreateContainer within sandbox \"8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1258eae1c4dbee9ef42b195973edcb405b3cdf23b4bd0ba4623e12643bb1a923\"" Sep 5 00:11:52.622595 containerd[1435]: time="2025-09-05T00:11:52.621845040Z" level=info msg="StartContainer for \"1258eae1c4dbee9ef42b195973edcb405b3cdf23b4bd0ba4623e12643bb1a923\"" Sep 5 00:11:52.648406 containerd[1435]: time="2025-09-05T00:11:52.648330680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:52.648406 containerd[1435]: time="2025-09-05T00:11:52.648387201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:52.648406 containerd[1435]: time="2025-09-05T00:11:52.648397882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:52.648600 containerd[1435]: time="2025-09-05T00:11:52.648475563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:52.652606 systemd[1]: Started cri-containerd-1258eae1c4dbee9ef42b195973edcb405b3cdf23b4bd0ba4623e12643bb1a923.scope - libcontainer container 1258eae1c4dbee9ef42b195973edcb405b3cdf23b4bd0ba4623e12643bb1a923. Sep 5 00:11:52.670432 systemd[1]: Started cri-containerd-aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73.scope - libcontainer container aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73. Sep 5 00:11:52.683610 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:52.699690 containerd[1435]: time="2025-09-05T00:11:52.699648011Z" level=info msg="StartContainer for \"1258eae1c4dbee9ef42b195973edcb405b3cdf23b4bd0ba4623e12643bb1a923\" returns successfully" Sep 5 00:11:52.703844 containerd[1435]: time="2025-09-05T00:11:52.703800047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8r5bf,Uid:ea063706-54f6-49f3-9a40-c61fd044db8b,Namespace:kube-system,Attempt:1,} returns sandbox id \"aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73\"" Sep 5 00:11:52.705176 kubelet[2479]: E0905 00:11:52.704817 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:52.710378 containerd[1435]: time="2025-09-05T00:11:52.710304885Z" level=info msg="CreateContainer within sandbox \"aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:11:52.724482 containerd[1435]: time="2025-09-05T00:11:52.724440901Z" level=info msg="CreateContainer within sandbox \"aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05514f8155b2f43963e16f65444b28983a37c5a0d78fda03c8e7e1e591f6cd77\"" Sep 5 00:11:52.725311 containerd[1435]: time="2025-09-05T00:11:52.724909870Z" level=info msg="StartContainer for \"05514f8155b2f43963e16f65444b28983a37c5a0d78fda03c8e7e1e591f6cd77\"" Sep 5 00:11:52.751445 systemd[1]: Started cri-containerd-05514f8155b2f43963e16f65444b28983a37c5a0d78fda03c8e7e1e591f6cd77.scope - libcontainer container 05514f8155b2f43963e16f65444b28983a37c5a0d78fda03c8e7e1e591f6cd77. 
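[Annotation] The kubelet dns.go warning above ("Nameserver limits exceeded ... 1.1.1.1 1.0.0.1 8.8.8.8") is kubelet composing the pod's resolv.conf: the glibc resolver honors at most three nameserver entries (MAXNS=3), so kubelet applies the first three and emits this event for the rest. A minimal sketch of that truncation in Go, illustrative only, not kubelet's actual code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the classic glibc resolver limit (MAXNS=3)
    // that kubelet enforces when building a pod's resolv.conf.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Println("nameserver limits exceeded, some nameservers will be omitted")
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }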
Sep 5 00:11:52.780990 containerd[1435]: time="2025-09-05T00:11:52.780861765Z" level=info msg="StartContainer for \"05514f8155b2f43963e16f65444b28983a37c5a0d78fda03c8e7e1e591f6cd77\" returns successfully" Sep 5 00:11:52.850896 containerd[1435]: time="2025-09-05T00:11:52.850838354Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:52.851645 containerd[1435]: time="2025-09-05T00:11:52.851605168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 00:11:52.853396 containerd[1435]: time="2025-09-05T00:11:52.853358800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 250.699748ms" Sep 5 00:11:52.853454 containerd[1435]: time="2025-09-05T00:11:52.853400361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 00:11:52.854381 containerd[1435]: time="2025-09-05T00:11:52.854090413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 00:11:52.857118 containerd[1435]: time="2025-09-05T00:11:52.857091068Z" level=info msg="CreateContainer within sandbox \"e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:11:52.867076 containerd[1435]: time="2025-09-05T00:11:52.867033608Z" level=info msg="CreateContainer within sandbox \"e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d283b44ddf6bb1fdcf8bdd3dba014f22ea4f009d9628381f2cf0d217e227fa24\"" Sep 5 00:11:52.868173 containerd[1435]: time="2025-09-05T00:11:52.868138948Z" level=info msg="StartContainer for \"d283b44ddf6bb1fdcf8bdd3dba014f22ea4f009d9628381f2cf0d217e227fa24\"" Sep 5 00:11:52.889430 systemd[1]: Started cri-containerd-d283b44ddf6bb1fdcf8bdd3dba014f22ea4f009d9628381f2cf0d217e227fa24.scope - libcontainer container d283b44ddf6bb1fdcf8bdd3dba014f22ea4f009d9628381f2cf0d217e227fa24. 
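[Annotation] The "Pulled image ... repo digest ..." records above pair the mutable tag v3.30.3 with an immutable content address: an OCI image digest is simply sha256 over the raw manifest bytes, so the digest keeps naming the exact same content even if the tag is later repointed. A sketch of the derivation (the manifest literal below is a stand-in, not the real Calico manifest):

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // An OCI repo digest is sha256 over the raw manifest bytes.
    func manifestDigest(manifest []byte) string {
        return fmt.Sprintf("sha256:%x", sha256.Sum256(manifest))
    }

    func main() {
        fake := []byte(`{"schemaVersion":2}`) // hypothetical stand-in manifest
        fmt.Println("ghcr.io/flatcar/calico/apiserver@" + manifestDigest(fake))
    }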
Sep 5 00:11:52.896795 systemd-networkd[1373]: cali1a2fe71483b: Gained IPv6LL Sep 5 00:11:52.920586 containerd[1435]: time="2025-09-05T00:11:52.920393936Z" level=info msg="StartContainer for \"d283b44ddf6bb1fdcf8bdd3dba014f22ea4f009d9628381f2cf0d217e227fa24\" returns successfully" Sep 5 00:11:53.344743 systemd-networkd[1373]: cali873cbfecd3e: Gained IPv6LL Sep 5 00:11:53.503978 kubelet[2479]: E0905 00:11:53.503674 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:53.520321 kubelet[2479]: I0905 00:11:53.518579 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8r5bf" podStartSLOduration=41.518554089 podStartE2EDuration="41.518554089s" podCreationTimestamp="2025-09-05 00:11:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:11:53.518456408 +0000 UTC m=+46.281027829" watchObservedRunningTime="2025-09-05 00:11:53.518554089 +0000 UTC m=+46.281125470" Sep 5 00:11:53.533350 kubelet[2479]: E0905 00:11:53.527308 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:53.554097 kubelet[2479]: I0905 00:11:53.553894 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85ffc8f694-8vh85" podStartSLOduration=25.571309003 podStartE2EDuration="27.553877798s" podCreationTimestamp="2025-09-05 00:11:26 +0000 UTC" firstStartedPulling="2025-09-05 00:11:50.871448617 +0000 UTC m=+43.634019998" lastFinishedPulling="2025-09-05 00:11:52.854017372 +0000 UTC m=+45.616588793" observedRunningTime="2025-09-05 00:11:53.553545512 +0000 UTC m=+46.316116933" watchObservedRunningTime="2025-09-05 00:11:53.553877798 +0000 UTC m=+46.316449219" Sep 5 00:11:53.567809 kubelet[2479]: I0905 00:11:53.567722 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85ffc8f694-nf5lk" podStartSLOduration=25.723742677 podStartE2EDuration="27.567707124s" podCreationTimestamp="2025-09-05 00:11:26 +0000 UTC" firstStartedPulling="2025-09-05 00:11:50.757830669 +0000 UTC m=+43.520402090" lastFinishedPulling="2025-09-05 00:11:52.601795156 +0000 UTC m=+45.364366537" observedRunningTime="2025-09-05 00:11:53.567168915 +0000 UTC m=+46.329740336" watchObservedRunningTime="2025-09-05 00:11:53.567707124 +0000 UTC m=+46.330278545" Sep 5 00:11:54.048675 systemd-networkd[1373]: cali6c697833cdb: Gained IPv6LL Sep 5 00:11:54.055069 containerd[1435]: time="2025-09-05T00:11:54.055014139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:54.064183 containerd[1435]: time="2025-09-05T00:11:54.064116818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 5 00:11:54.078090 containerd[1435]: time="2025-09-05T00:11:54.078049061Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:54.160919 containerd[1435]: time="2025-09-05T00:11:54.160806787Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:54.167837 containerd[1435]: time="2025-09-05T00:11:54.167378062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.313212807s" Sep 5 00:11:54.167837 containerd[1435]: time="2025-09-05T00:11:54.167420543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 5 00:11:54.169083 containerd[1435]: time="2025-09-05T00:11:54.169028691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 00:11:54.177213 containerd[1435]: time="2025-09-05T00:11:54.177156073Z" level=info msg="CreateContainer within sandbox \"d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 00:11:54.193333 containerd[1435]: time="2025-09-05T00:11:54.193298435Z" level=info msg="CreateContainer within sandbox \"d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"574d20f2609b113a03834f7ae0cf74f3859b18b5f09a5a794a1d9a95f5e7da95\"" Sep 5 00:11:54.194338 containerd[1435]: time="2025-09-05T00:11:54.194035608Z" level=info msg="StartContainer for \"574d20f2609b113a03834f7ae0cf74f3859b18b5f09a5a794a1d9a95f5e7da95\"" Sep 5 00:11:54.223434 systemd[1]: Started cri-containerd-574d20f2609b113a03834f7ae0cf74f3859b18b5f09a5a794a1d9a95f5e7da95.scope - libcontainer container 574d20f2609b113a03834f7ae0cf74f3859b18b5f09a5a794a1d9a95f5e7da95. Sep 5 00:11:54.251866 containerd[1435]: time="2025-09-05T00:11:54.251822178Z" level=info msg="StartContainer for \"574d20f2609b113a03834f7ae0cf74f3859b18b5f09a5a794a1d9a95f5e7da95\" returns successfully" Sep 5 00:11:54.312788 containerd[1435]: time="2025-09-05T00:11:54.312679201Z" level=info msg="StopPodSandbox for \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\"" Sep 5 00:11:54.400920 containerd[1435]: 2025-09-05 00:11:54.365 [INFO][5103] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:11:54.400920 containerd[1435]: 2025-09-05 00:11:54.365 [INFO][5103] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" iface="eth0" netns="/var/run/netns/cni-ef082189-c7b1-afbb-e83e-04325230ca4e" Sep 5 00:11:54.400920 containerd[1435]: 2025-09-05 00:11:54.366 [INFO][5103] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" iface="eth0" netns="/var/run/netns/cni-ef082189-c7b1-afbb-e83e-04325230ca4e" Sep 5 00:11:54.400920 containerd[1435]: 2025-09-05 00:11:54.366 [INFO][5103] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" iface="eth0" netns="/var/run/netns/cni-ef082189-c7b1-afbb-e83e-04325230ca4e" Sep 5 00:11:54.400920 containerd[1435]: 2025-09-05 00:11:54.366 [INFO][5103] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:11:54.400920 containerd[1435]: 2025-09-05 00:11:54.366 [INFO][5103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:11:54.400920 containerd[1435]: 2025-09-05 00:11:54.385 [INFO][5112] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" HandleID="k8s-pod-network.6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Workload="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:11:54.400920 containerd[1435]: 2025-09-05 00:11:54.386 [INFO][5112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:54.400920 containerd[1435]: 2025-09-05 00:11:54.386 [INFO][5112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:54.400920 containerd[1435]: 2025-09-05 00:11:54.395 [WARNING][5112] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" HandleID="k8s-pod-network.6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Workload="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:11:54.400920 containerd[1435]: 2025-09-05 00:11:54.395 [INFO][5112] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" HandleID="k8s-pod-network.6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Workload="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:11:54.400920 containerd[1435]: 2025-09-05 00:11:54.397 [INFO][5112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:54.400920 containerd[1435]: 2025-09-05 00:11:54.398 [INFO][5103] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:11:54.401365 containerd[1435]: time="2025-09-05T00:11:54.401085186Z" level=info msg="TearDown network for sandbox \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\" successfully" Sep 5 00:11:54.401365 containerd[1435]: time="2025-09-05T00:11:54.401112266Z" level=info msg="StopPodSandbox for \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\" returns successfully" Sep 5 00:11:54.401911 containerd[1435]: time="2025-09-05T00:11:54.401882080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5447c7486-kqmj4,Uid:b03c02b3-c57d-4b35-90a2-63009120bbea,Namespace:calico-system,Attempt:1,}" Sep 5 00:11:54.498094 systemd[1]: run-containerd-runc-k8s.io-574d20f2609b113a03834f7ae0cf74f3859b18b5f09a5a794a1d9a95f5e7da95-runc.8ABqFK.mount: Deactivated successfully. Sep 5 00:11:54.498237 systemd[1]: run-netns-cni\x2def082189\x2dc7b1\x2dafbb\x2de83e\x2d04325230ca4e.mount: Deactivated successfully. 
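[Annotation] The WARNING above ("Asked to release address but it doesn't exist. Ignoring") is benign by design: the CNI spec requires DEL to be idempotent, so tearing down a sandbox whose address was already freed must still succeed. A toy in-memory version of that contract (Calico's real release path goes through its datastore; the handle and IP below are hypothetical):

    package main

    import "fmt"

    type ipam struct {
        allocated map[string]string // handleID -> assigned IP
    }

    // ReleaseByHandle frees the address for a handle; a missing handle
    // means the address was already released, so it succeeds silently.
    func (p *ipam) ReleaseByHandle(handle string) error {
        if _, ok := p.allocated[handle]; !ok {
            fmt.Printf("release %s: doesn't exist, ignoring\n", handle)
            return nil
        }
        delete(p.allocated, handle)
        return nil
    }

    func main() {
        p := &ipam{allocated: map[string]string{"k8s-pod-network.6e698c37": "192.168.88.131"}}
        _ = p.ReleaseByHandle("k8s-pod-network.6e698c37") // frees the IP
        _ = p.ReleaseByHandle("k8s-pod-network.6e698c37") // repeat DEL is a no-op
    }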
Sep 5 00:11:54.531732 systemd-networkd[1373]: cali50d02e4f17e: Link UP Sep 5 00:11:54.534080 systemd-networkd[1373]: cali50d02e4f17e: Gained carrier Sep 5 00:11:54.540994 kubelet[2479]: I0905 00:11:54.540965 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:11:54.541324 kubelet[2479]: E0905 00:11:54.541247 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.452 [INFO][5122] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0 calico-kube-controllers-5447c7486- calico-system b03c02b3-c57d-4b35-90a2-63009120bbea 1039 0 2025-09-05 00:11:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5447c7486 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5447c7486-kqmj4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali50d02e4f17e [] [] }} ContainerID="800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" Namespace="calico-system" Pod="calico-kube-controllers-5447c7486-kqmj4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-" Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.452 [INFO][5122] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" Namespace="calico-system" Pod="calico-kube-controllers-5447c7486-kqmj4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.476 [INFO][5136] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" HandleID="k8s-pod-network.800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" Workload="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.476 [INFO][5136] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" HandleID="k8s-pod-network.800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" Workload="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3310), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5447c7486-kqmj4", "timestamp":"2025-09-05 00:11:54.476112737 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.476 [INFO][5136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.476 [INFO][5136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
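[Annotation] Each assignment above brackets its work between "Acquired host-wide IPAM lock" and "Released host-wide IPAM lock", so two pods landing on the node at once cannot claim the same address. A process-local sketch of that discipline; the actual plugin coordinates separate CNI invocations (distinct processes), not goroutines:

    package main

    import (
        "fmt"
        "sync"
    )

    var (
        hostLock sync.Mutex
        next     = 135 // pretend .135 is the next free offset in the block
    )

    func assign() string {
        hostLock.Lock()         // "Acquired host-wide IPAM lock."
        defer hostLock.Unlock() // "Released host-wide IPAM lock."
        ip := fmt.Sprintf("192.168.88.%d/26", next)
        next++
        return ip
    }

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() { defer wg.Done(); fmt.Println(assign()) }()
        }
        wg.Wait()
    }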
Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.476 [INFO][5136] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.489 [INFO][5136] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" host="localhost" Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.500 [INFO][5136] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.506 [INFO][5136] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.509 [INFO][5136] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.511 [INFO][5136] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.511 [INFO][5136] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" host="localhost" Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.513 [INFO][5136] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.517 [INFO][5136] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" host="localhost" Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.524 [INFO][5136] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" host="localhost" Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.524 [INFO][5136] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" host="localhost" Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.524 [INFO][5136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
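[Annotation] The affinity walk above confirms this node owns the block 192.168.88.128/26 and then claims 192.168.88.136 from it. A quick arithmetic check with net/netip that the claimed address falls inside the block (a /26 spans 64 addresses):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        ip := netip.MustParseAddr("192.168.88.136")
        fmt.Println(block.Contains(ip))       // true
        fmt.Println(1 << (32 - block.Bits())) // 64 addresses in the block
    }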
Sep 5 00:11:54.555086 containerd[1435]: 2025-09-05 00:11:54.524 [INFO][5136] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" HandleID="k8s-pod-network.800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" Workload="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:11:54.555647 containerd[1435]: 2025-09-05 00:11:54.526 [INFO][5122] cni-plugin/k8s.go 418: Populated endpoint ContainerID="800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" Namespace="calico-system" Pod="calico-kube-controllers-5447c7486-kqmj4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0", GenerateName:"calico-kube-controllers-5447c7486-", Namespace:"calico-system", SelfLink:"", UID:"b03c02b3-c57d-4b35-90a2-63009120bbea", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5447c7486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5447c7486-kqmj4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50d02e4f17e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:54.555647 containerd[1435]: 2025-09-05 00:11:54.526 [INFO][5122] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" Namespace="calico-system" Pod="calico-kube-controllers-5447c7486-kqmj4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:11:54.555647 containerd[1435]: 2025-09-05 00:11:54.526 [INFO][5122] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50d02e4f17e ContainerID="800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" Namespace="calico-system" Pod="calico-kube-controllers-5447c7486-kqmj4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:11:54.555647 containerd[1435]: 2025-09-05 00:11:54.532 [INFO][5122] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" Namespace="calico-system" Pod="calico-kube-controllers-5447c7486-kqmj4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:11:54.555647 containerd[1435]: 2025-09-05 00:11:54.533 [INFO][5122] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" Namespace="calico-system" Pod="calico-kube-controllers-5447c7486-kqmj4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0", GenerateName:"calico-kube-controllers-5447c7486-", Namespace:"calico-system", SelfLink:"", UID:"b03c02b3-c57d-4b35-90a2-63009120bbea", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5447c7486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe", Pod:"calico-kube-controllers-5447c7486-kqmj4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50d02e4f17e", MAC:"ee:09:19:84:e3:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:54.555647 containerd[1435]: 2025-09-05 00:11:54.551 [INFO][5122] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe" Namespace="calico-system" Pod="calico-kube-controllers-5447c7486-kqmj4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:11:54.587122 containerd[1435]: time="2025-09-05T00:11:54.586607267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:54.587122 containerd[1435]: time="2025-09-05T00:11:54.587032035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:54.587122 containerd[1435]: time="2025-09-05T00:11:54.587044715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:54.587325 containerd[1435]: time="2025-09-05T00:11:54.587121116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:54.603787 systemd[1]: run-containerd-runc-k8s.io-800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe-runc.M1vt0A.mount: Deactivated successfully. Sep 5 00:11:54.619506 systemd[1]: Started cri-containerd-800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe.scope - libcontainer container 800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe. 
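[Annotation] Calico names the host side of each veth pair deterministically, which is how "cali50d02e4f17e" above was chosen before the interface existed. A sketch of the scheme, assuming the sha1-based derivation recent Calico releases use (the exact hash input varies by version, so treat the input string here as hypothetical); the result must fit the kernel's 15-character IFNAMSIZ limit, hence "cali" plus 11 hex characters:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName hashes the workload endpoint identity and keeps
    // "cali" + 11 hex chars: 15 bytes, the kernel's maximum.
    func vethName(workloadEndpointID string) string {
        sum := sha1.Sum([]byte(workloadEndpointID))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        // Hypothetical endpoint ID purely for illustration.
        fmt.Println(vethName("calico-system/calico-kube-controllers-5447c7486-kqmj4"))
    }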
Sep 5 00:11:54.632683 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:54.653023 containerd[1435]: time="2025-09-05T00:11:54.652981667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5447c7486-kqmj4,Uid:b03c02b3-c57d-4b35-90a2-63009120bbea,Namespace:calico-system,Attempt:1,} returns sandbox id \"800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe\"" Sep 5 00:11:55.545683 kubelet[2479]: E0905 00:11:55.545649 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:55.998543 systemd[1]: Started sshd@7-10.0.0.23:22-10.0.0.1:35484.service - OpenSSH per-connection server daemon (10.0.0.1:35484). Sep 5 00:11:56.032502 systemd-networkd[1373]: cali50d02e4f17e: Gained IPv6LL Sep 5 00:11:56.045880 sshd[5210]: Accepted publickey for core from 10.0.0.1 port 35484 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:11:56.048124 sshd[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:56.054997 systemd-logind[1417]: New session 8 of user core. Sep 5 00:11:56.060977 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 00:11:56.352931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2118868339.mount: Deactivated successfully. Sep 5 00:11:56.408028 sshd[5210]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:56.412347 systemd[1]: sshd@7-10.0.0.23:22-10.0.0.1:35484.service: Deactivated successfully. Sep 5 00:11:56.414024 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:11:56.416714 systemd-logind[1417]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:11:56.419954 systemd-logind[1417]: Removed session 8. 
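[Annotation] The "Gained IPv6LL" messages above mean an interface acquired its fe80::/64 link-local address. When an interface uses the classic EUI-64 scheme (rather than stable-privacy addressing), that address is derived mechanically from the MAC: flip the universal/local bit of the first octet and splice 0xfffe into the middle. Worked here with the endpoint MAC recorded earlier (ee:09:19:84:e3:e1), purely as an example of the derivation:

    package main

    import (
        "fmt"
        "net"
        "net/netip"
    )

    // linkLocal builds the EUI-64 link-local address for a MAC.
    func linkLocal(mac net.HardwareAddr) netip.Addr {
        var b [16]byte
        b[0], b[1] = 0xfe, 0x80
        b[8] = mac[0] ^ 0x02 // flip the universal/local bit
        b[9], b[10] = mac[1], mac[2]
        b[11], b[12] = 0xff, 0xfe
        b[13], b[14], b[15] = mac[3], mac[4], mac[5]
        return netip.AddrFrom16(b)
    }

    func main() {
        mac, _ := net.ParseMAC("ee:09:19:84:e3:e1")
        fmt.Println(linkLocal(mac)) // fe80::ec09:19ff:fe84:e3e1
    }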
Sep 5 00:11:56.833760 containerd[1435]: time="2025-09-05T00:11:56.833705097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:56.834465 containerd[1435]: time="2025-09-05T00:11:56.834422629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 5 00:11:56.835418 containerd[1435]: time="2025-09-05T00:11:56.835368565Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:56.837836 containerd[1435]: time="2025-09-05T00:11:56.837806686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:56.838552 containerd[1435]: time="2025-09-05T00:11:56.838518218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.669437566s" Sep 5 00:11:56.838552 containerd[1435]: time="2025-09-05T00:11:56.838552139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 5 00:11:56.840019 containerd[1435]: time="2025-09-05T00:11:56.839984243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 00:11:56.843802 containerd[1435]: time="2025-09-05T00:11:56.843766907Z" level=info msg="CreateContainer within sandbox \"dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 00:11:56.857972 containerd[1435]: time="2025-09-05T00:11:56.857866265Z" level=info msg="CreateContainer within sandbox \"dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"83901f69cbd0ab7237e01e3a694ce7647c4dc9b8d970c28205a419e7a54b304b\"" Sep 5 00:11:56.859252 containerd[1435]: time="2025-09-05T00:11:56.858372513Z" level=info msg="StartContainer for \"83901f69cbd0ab7237e01e3a694ce7647c4dc9b8d970c28205a419e7a54b304b\"" Sep 5 00:11:56.889723 systemd[1]: Started cri-containerd-83901f69cbd0ab7237e01e3a694ce7647c4dc9b8d970c28205a419e7a54b304b.scope - libcontainer container 83901f69cbd0ab7237e01e3a694ce7647c4dc9b8d970c28205a419e7a54b304b. 
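[Annotation] containerd reports each pull's wall-clock duration inline ("in 2.669437566s" for goldmane above). Adding that duration to the PullImage timestamp logged earlier (00:11:54.169028691) reproduces the Pulled record's timestamp to within log-emission latency, a quick consistency check on the log itself:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // "PullImage" request for calico/goldmane, from the log above.
        start, _ := time.Parse(time.RFC3339Nano, "2025-09-05T00:11:54.169028691Z")
        d := 2669437566 * time.Nanosecond // "in 2.669437566s"
        fmt.Println(start.Add(d).Format(time.RFC3339Nano))
        // 2025-09-05T00:11:56.838466257Z, a few tens of microseconds
        // before the "Pulled image" record was written.
    }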
Sep 5 00:11:56.964661 containerd[1435]: time="2025-09-05T00:11:56.964614187Z" level=info msg="StartContainer for \"83901f69cbd0ab7237e01e3a694ce7647c4dc9b8d970c28205a419e7a54b304b\" returns successfully" Sep 5 00:11:58.061905 containerd[1435]: time="2025-09-05T00:11:58.061848055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:58.062729 containerd[1435]: time="2025-09-05T00:11:58.062683308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 00:11:58.063523 containerd[1435]: time="2025-09-05T00:11:58.063493602Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:58.066062 containerd[1435]: time="2025-09-05T00:11:58.066021643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:58.066814 containerd[1435]: time="2025-09-05T00:11:58.066779056Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.226764372s" Sep 5 00:11:58.066853 containerd[1435]: time="2025-09-05T00:11:58.066816056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 00:11:58.068052 containerd[1435]: time="2025-09-05T00:11:58.068024236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 00:11:58.072125 containerd[1435]: time="2025-09-05T00:11:58.072092982Z" level=info msg="CreateContainer within sandbox \"d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 00:11:58.089969 containerd[1435]: time="2025-09-05T00:11:58.089920874Z" level=info msg="CreateContainer within sandbox \"d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5084ba040e1fe448becfdd424f6d8bcfc406ecdcd76e1c1f5bea8e873af9c453\"" Sep 5 00:11:58.090472 containerd[1435]: time="2025-09-05T00:11:58.090445243Z" level=info msg="StartContainer for \"5084ba040e1fe448becfdd424f6d8bcfc406ecdcd76e1c1f5bea8e873af9c453\"" Sep 5 00:11:58.121481 systemd[1]: Started cri-containerd-5084ba040e1fe448becfdd424f6d8bcfc406ecdcd76e1c1f5bea8e873af9c453.scope - libcontainer container 5084ba040e1fe448becfdd424f6d8bcfc406ecdcd76e1c1f5bea8e873af9c453. 
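[Annotation] All the containerd lines in this log are logfmt: space-separated key=value pairs whose values are double-quoted with backslash escapes when they contain spaces. A rough extractor for pulling fields like level and msg out of such lines (good enough for ad-hoc grepping; not a complete logfmt parser):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches key=value where value is either a quoted string
    // (with \" escapes) or a bare token.
    var kv = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

    func main() {
        line := `time="2025-09-05T00:11:56.964614187Z" level=info msg="StartContainer for \"83901f69cbd0ab7237e01e3a694ce7647c4dc9b8d970c28205a419e7a54b304b\" returns successfully"`
        for _, m := range kv.FindAllStringSubmatch(line, -1) {
            fmt.Printf("%-6s => %s\n", m[1], m[2])
        }
    }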
Sep 5 00:11:58.146106 containerd[1435]: time="2025-09-05T00:11:58.146071553Z" level=info msg="StartContainer for \"5084ba040e1fe448becfdd424f6d8bcfc406ecdcd76e1c1f5bea8e873af9c453\" returns successfully" Sep 5 00:11:58.395919 kubelet[2479]: I0905 00:11:58.395808 2479 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 00:11:58.402516 kubelet[2479]: I0905 00:11:58.402403 2479 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 00:11:58.572223 kubelet[2479]: I0905 00:11:58.572109 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-x7qbm" podStartSLOduration=20.440998625 podStartE2EDuration="27.572091006s" podCreationTimestamp="2025-09-05 00:11:31 +0000 UTC" firstStartedPulling="2025-09-05 00:11:50.936721571 +0000 UTC m=+43.699292992" lastFinishedPulling="2025-09-05 00:11:58.067813952 +0000 UTC m=+50.830385373" observedRunningTime="2025-09-05 00:11:58.568926794 +0000 UTC m=+51.331498215" watchObservedRunningTime="2025-09-05 00:11:58.572091006 +0000 UTC m=+51.334662427" Sep 5 00:11:58.572637 kubelet[2479]: I0905 00:11:58.572347 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-x9dsg" podStartSLOduration=23.338525475 podStartE2EDuration="28.57233989s" podCreationTimestamp="2025-09-05 00:11:30 +0000 UTC" firstStartedPulling="2025-09-05 00:11:51.606049346 +0000 UTC m=+44.368620767" lastFinishedPulling="2025-09-05 00:11:56.839863761 +0000 UTC m=+49.602435182" observedRunningTime="2025-09-05 00:11:57.571349399 +0000 UTC m=+50.333920820" watchObservedRunningTime="2025-09-05 00:11:58.57233989 +0000 UTC m=+51.334911311" Sep 5 00:12:00.294749 kubelet[2479]: I0905 00:12:00.294719 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:12:00.740967 containerd[1435]: time="2025-09-05T00:12:00.740849812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:00.741635 containerd[1435]: time="2025-09-05T00:12:00.741607024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 5 00:12:00.742427 containerd[1435]: time="2025-09-05T00:12:00.742403397Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:00.744960 containerd[1435]: time="2025-09-05T00:12:00.744923317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:12:00.745711 containerd[1435]: time="2025-09-05T00:12:00.745681369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.677623573s" Sep 5 00:12:00.745750 containerd[1435]: time="2025-09-05T00:12:00.745717329Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 5 00:12:00.764827 containerd[1435]: time="2025-09-05T00:12:00.764766552Z" level=info msg="CreateContainer within sandbox \"800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 00:12:00.776179 containerd[1435]: time="2025-09-05T00:12:00.776135853Z" level=info msg="CreateContainer within sandbox \"800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7913696f1df0ff4c87aa4dc3b365e9990ce3f930cf98542372fe4838c2139e3e\"" Sep 5 00:12:00.778678 containerd[1435]: time="2025-09-05T00:12:00.777479475Z" level=info msg="StartContainer for \"7913696f1df0ff4c87aa4dc3b365e9990ce3f930cf98542372fe4838c2139e3e\"" Sep 5 00:12:00.817470 systemd[1]: Started cri-containerd-7913696f1df0ff4c87aa4dc3b365e9990ce3f930cf98542372fe4838c2139e3e.scope - libcontainer container 7913696f1df0ff4c87aa4dc3b365e9990ce3f930cf98542372fe4838c2139e3e. Sep 5 00:12:00.887998 containerd[1435]: time="2025-09-05T00:12:00.887809150Z" level=info msg="StartContainer for \"7913696f1df0ff4c87aa4dc3b365e9990ce3f930cf98542372fe4838c2139e3e\" returns successfully" Sep 5 00:12:01.417976 systemd[1]: Started sshd@8-10.0.0.23:22-10.0.0.1:50970.service - OpenSSH per-connection server daemon (10.0.0.1:50970). Sep 5 00:12:01.465197 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 50970 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:12:01.467169 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:01.472265 systemd-logind[1417]: New session 9 of user core. Sep 5 00:12:01.481454 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:12:01.593039 kubelet[2479]: I0905 00:12:01.592867 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5447c7486-kqmj4" podStartSLOduration=24.500238108 podStartE2EDuration="30.592851007s" podCreationTimestamp="2025-09-05 00:11:31 +0000 UTC" firstStartedPulling="2025-09-05 00:11:54.654017845 +0000 UTC m=+47.416589266" lastFinishedPulling="2025-09-05 00:12:00.746630744 +0000 UTC m=+53.509202165" observedRunningTime="2025-09-05 00:12:01.592109636 +0000 UTC m=+54.354681057" watchObservedRunningTime="2025-09-05 00:12:01.592851007 +0000 UTC m=+54.355422428" Sep 5 00:12:01.603441 systemd[1]: run-containerd-runc-k8s.io-7913696f1df0ff4c87aa4dc3b365e9990ce3f930cf98542372fe4838c2139e3e-runc.9N8ecR.mount: Deactivated successfully. Sep 5 00:12:01.809711 sshd[5469]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:01.813244 systemd[1]: sshd@8-10.0.0.23:22-10.0.0.1:50970.service: Deactivated successfully. Sep 5 00:12:01.814998 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:12:01.815599 systemd-logind[1417]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:12:01.816344 systemd-logind[1417]: Removed session 9. Sep 5 00:12:02.549383 kubelet[2479]: I0905 00:12:02.549324 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:12:06.820963 systemd[1]: Started sshd@9-10.0.0.23:22-10.0.0.1:50984.service - OpenSSH per-connection server daemon (10.0.0.1:50984). 
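[Annotation] The pod_startup_latency_tracker line above encodes a simple relation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. Reproducing both numbers for calico-kube-controllers-5447c7486-kqmj4 from the fields in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-09-05 00:11:31 +0000 UTC")
        pullStart := parse("2025-09-05 00:11:54.654017845 +0000 UTC")
        pullEnd := parse("2025-09-05 00:12:00.746630744 +0000 UTC")
        running := parse("2025-09-05 00:12:01.592851007 +0000 UTC")

        e2e := running.Sub(created)         // 30.592851007s
        slo := e2e - pullEnd.Sub(pullStart) // 24.500238108s
        fmt.Println(e2e, slo)
    }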
Sep 5 00:12:06.854853 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 50984 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:12:06.856324 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:06.860693 systemd-logind[1417]: New session 10 of user core. Sep 5 00:12:06.878501 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 00:12:07.088547 sshd[5517]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:07.098921 systemd[1]: sshd@9-10.0.0.23:22-10.0.0.1:50984.service: Deactivated successfully. Sep 5 00:12:07.100459 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:12:07.102456 systemd-logind[1417]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:12:07.109211 systemd[1]: Started sshd@10-10.0.0.23:22-10.0.0.1:51000.service - OpenSSH per-connection server daemon (10.0.0.1:51000). Sep 5 00:12:07.110217 systemd-logind[1417]: Removed session 10. Sep 5 00:12:07.146036 sshd[5532]: Accepted publickey for core from 10.0.0.1 port 51000 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:12:07.147615 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:07.155596 systemd-logind[1417]: New session 11 of user core. Sep 5 00:12:07.165490 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:12:07.321306 containerd[1435]: time="2025-09-05T00:12:07.319788812Z" level=info msg="StopPodSandbox for \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\"" Sep 5 00:12:07.401246 sshd[5532]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:07.410176 systemd[1]: Started sshd@11-10.0.0.23:22-10.0.0.1:51006.service - OpenSSH per-connection server daemon (10.0.0.1:51006). Sep 5 00:12:07.414039 systemd-logind[1417]: Session 11 logged out. Waiting for processes to exit. Sep 5 00:12:07.414212 systemd[1]: sshd@10-10.0.0.23:22-10.0.0.1:51000.service: Deactivated successfully. Sep 5 00:12:07.417263 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:12:07.424452 systemd-logind[1417]: Removed session 11. Sep 5 00:12:07.473177 sshd[5568]: Accepted publickey for core from 10.0.0.1 port 51006 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:12:07.475517 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:07.479422 systemd-logind[1417]: New session 12 of user core. Sep 5 00:12:07.492498 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 00:12:07.499600 containerd[1435]: 2025-09-05 00:12:07.391 [WARNING][5554] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0", GenerateName:"calico-apiserver-85ffc8f694-", Namespace:"calico-apiserver", SelfLink:"", UID:"06a23f84-da38-460c-9db8-35e6a608cf99", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85ffc8f694", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b", Pod:"calico-apiserver-85ffc8f694-nf5lk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica7e277e422", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:07.499600 containerd[1435]: 2025-09-05 00:12:07.391 [INFO][5554] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:12:07.499600 containerd[1435]: 2025-09-05 00:12:07.391 [INFO][5554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" iface="eth0" netns="" Sep 5 00:12:07.499600 containerd[1435]: 2025-09-05 00:12:07.391 [INFO][5554] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:12:07.499600 containerd[1435]: 2025-09-05 00:12:07.391 [INFO][5554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:12:07.499600 containerd[1435]: 2025-09-05 00:12:07.435 [INFO][5562] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" HandleID="k8s-pod-network.b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Workload="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:12:07.499600 containerd[1435]: 2025-09-05 00:12:07.435 [INFO][5562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:07.499600 containerd[1435]: 2025-09-05 00:12:07.435 [INFO][5562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:12:07.499600 containerd[1435]: 2025-09-05 00:12:07.474 [WARNING][5562] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" HandleID="k8s-pod-network.b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Workload="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:12:07.499600 containerd[1435]: 2025-09-05 00:12:07.474 [INFO][5562] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" HandleID="k8s-pod-network.b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Workload="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:12:07.499600 containerd[1435]: 2025-09-05 00:12:07.493 [INFO][5562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:07.499600 containerd[1435]: 2025-09-05 00:12:07.496 [INFO][5554] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:12:07.500359 containerd[1435]: time="2025-09-05T00:12:07.499617418Z" level=info msg="TearDown network for sandbox \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\" successfully" Sep 5 00:12:07.500359 containerd[1435]: time="2025-09-05T00:12:07.499639698Z" level=info msg="StopPodSandbox for \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\" returns successfully" Sep 5 00:12:07.500359 containerd[1435]: time="2025-09-05T00:12:07.500186226Z" level=info msg="RemovePodSandbox for \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\"" Sep 5 00:12:07.500359 containerd[1435]: time="2025-09-05T00:12:07.500221747Z" level=info msg="Forcibly stopping sandbox \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\"" Sep 5 00:12:07.592308 containerd[1435]: 2025-09-05 00:12:07.545 [WARNING][5586] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0", GenerateName:"calico-apiserver-85ffc8f694-", Namespace:"calico-apiserver", SelfLink:"", UID:"06a23f84-da38-460c-9db8-35e6a608cf99", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85ffc8f694", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e3620f1ff1bcbbff3ca9c2a75d5edc83283265d8976201bb56cba36b4296f3b", Pod:"calico-apiserver-85ffc8f694-nf5lk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica7e277e422", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:07.592308 containerd[1435]: 2025-09-05 00:12:07.545 [INFO][5586] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:12:07.592308 containerd[1435]: 2025-09-05 00:12:07.545 [INFO][5586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" iface="eth0" netns="" Sep 5 00:12:07.592308 containerd[1435]: 2025-09-05 00:12:07.546 [INFO][5586] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:12:07.592308 containerd[1435]: 2025-09-05 00:12:07.546 [INFO][5586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:12:07.592308 containerd[1435]: 2025-09-05 00:12:07.572 [INFO][5602] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" HandleID="k8s-pod-network.b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Workload="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:12:07.592308 containerd[1435]: 2025-09-05 00:12:07.572 [INFO][5602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:07.592308 containerd[1435]: 2025-09-05 00:12:07.572 [INFO][5602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:12:07.592308 containerd[1435]: 2025-09-05 00:12:07.584 [WARNING][5602] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" HandleID="k8s-pod-network.b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Workload="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:12:07.592308 containerd[1435]: 2025-09-05 00:12:07.584 [INFO][5602] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" HandleID="k8s-pod-network.b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Workload="localhost-k8s-calico--apiserver--85ffc8f694--nf5lk-eth0" Sep 5 00:12:07.592308 containerd[1435]: 2025-09-05 00:12:07.587 [INFO][5602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:07.592308 containerd[1435]: 2025-09-05 00:12:07.590 [INFO][5586] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6" Sep 5 00:12:07.592711 containerd[1435]: time="2025-09-05T00:12:07.592349262Z" level=info msg="TearDown network for sandbox \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\" successfully" Sep 5 00:12:07.618040 containerd[1435]: time="2025-09-05T00:12:07.617255549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:12:07.618040 containerd[1435]: time="2025-09-05T00:12:07.617419151Z" level=info msg="RemovePodSandbox \"b39d5085b700bf0aba00c44df401822825ba2f1adc9b2fa965c3657c3b35c4b6\" returns successfully" Sep 5 00:12:07.619156 containerd[1435]: time="2025-09-05T00:12:07.618377845Z" level=info msg="StopPodSandbox for \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\"" Sep 5 00:12:07.670469 sshd[5568]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:07.677146 systemd-logind[1417]: Session 12 logged out. Waiting for processes to exit. Sep 5 00:12:07.677526 systemd[1]: sshd@11-10.0.0.23:22-10.0.0.1:51006.service: Deactivated successfully. Sep 5 00:12:07.679955 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:12:07.682544 systemd-logind[1417]: Removed session 12. Sep 5 00:12:07.701482 containerd[1435]: 2025-09-05 00:12:07.658 [WARNING][5620] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0", GenerateName:"calico-kube-controllers-5447c7486-", Namespace:"calico-system", SelfLink:"", UID:"b03c02b3-c57d-4b35-90a2-63009120bbea", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5447c7486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe", Pod:"calico-kube-controllers-5447c7486-kqmj4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50d02e4f17e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:07.701482 containerd[1435]: 2025-09-05 00:12:07.658 [INFO][5620] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:12:07.701482 containerd[1435]: 2025-09-05 00:12:07.658 [INFO][5620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" iface="eth0" netns="" Sep 5 00:12:07.701482 containerd[1435]: 2025-09-05 00:12:07.658 [INFO][5620] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:12:07.701482 containerd[1435]: 2025-09-05 00:12:07.658 [INFO][5620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:12:07.701482 containerd[1435]: 2025-09-05 00:12:07.684 [INFO][5631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" HandleID="k8s-pod-network.6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Workload="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:12:07.701482 containerd[1435]: 2025-09-05 00:12:07.684 [INFO][5631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:07.701482 containerd[1435]: 2025-09-05 00:12:07.684 [INFO][5631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:12:07.701482 containerd[1435]: 2025-09-05 00:12:07.695 [WARNING][5631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" HandleID="k8s-pod-network.6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Workload="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:12:07.701482 containerd[1435]: 2025-09-05 00:12:07.695 [INFO][5631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" HandleID="k8s-pod-network.6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Workload="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:12:07.701482 containerd[1435]: 2025-09-05 00:12:07.697 [INFO][5631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:07.701482 containerd[1435]: 2025-09-05 00:12:07.699 [INFO][5620] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:12:07.701875 containerd[1435]: time="2025-09-05T00:12:07.701518589Z" level=info msg="TearDown network for sandbox \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\" successfully" Sep 5 00:12:07.701875 containerd[1435]: time="2025-09-05T00:12:07.701543629Z" level=info msg="StopPodSandbox for \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\" returns successfully" Sep 5 00:12:07.702336 containerd[1435]: time="2025-09-05T00:12:07.702081037Z" level=info msg="RemovePodSandbox for \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\"" Sep 5 00:12:07.702336 containerd[1435]: time="2025-09-05T00:12:07.702119318Z" level=info msg="Forcibly stopping sandbox \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\"" Sep 5 00:12:07.776218 containerd[1435]: 2025-09-05 00:12:07.738 [WARNING][5650] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0", GenerateName:"calico-kube-controllers-5447c7486-", Namespace:"calico-system", SelfLink:"", UID:"b03c02b3-c57d-4b35-90a2-63009120bbea", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5447c7486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"800f61ea8f4462962cbb4cef951d9def179a1a265ec045a397372dcc2dd7aabe", Pod:"calico-kube-controllers-5447c7486-kqmj4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50d02e4f17e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:07.776218 containerd[1435]: 2025-09-05 00:12:07.738 [INFO][5650] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:12:07.776218 containerd[1435]: 2025-09-05 00:12:07.738 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" iface="eth0" netns="" Sep 5 00:12:07.776218 containerd[1435]: 2025-09-05 00:12:07.738 [INFO][5650] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:12:07.776218 containerd[1435]: 2025-09-05 00:12:07.738 [INFO][5650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:12:07.776218 containerd[1435]: 2025-09-05 00:12:07.756 [INFO][5659] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" HandleID="k8s-pod-network.6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Workload="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:12:07.776218 containerd[1435]: 2025-09-05 00:12:07.757 [INFO][5659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:07.776218 containerd[1435]: 2025-09-05 00:12:07.757 [INFO][5659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:12:07.776218 containerd[1435]: 2025-09-05 00:12:07.770 [WARNING][5659] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" HandleID="k8s-pod-network.6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Workload="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:12:07.776218 containerd[1435]: 2025-09-05 00:12:07.770 [INFO][5659] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" HandleID="k8s-pod-network.6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Workload="localhost-k8s-calico--kube--controllers--5447c7486--kqmj4-eth0" Sep 5 00:12:07.776218 containerd[1435]: 2025-09-05 00:12:07.771 [INFO][5659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:07.776218 containerd[1435]: 2025-09-05 00:12:07.774 [INFO][5650] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06" Sep 5 00:12:07.776619 containerd[1435]: time="2025-09-05T00:12:07.776264288Z" level=info msg="TearDown network for sandbox \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\" successfully" Sep 5 00:12:07.781374 containerd[1435]: time="2025-09-05T00:12:07.781336323Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:12:07.781443 containerd[1435]: time="2025-09-05T00:12:07.781410444Z" level=info msg="RemovePodSandbox \"6e698c37ff241d871d8ba762231db8625c1443a5aad1becf4e0ab6d4b2394c06\" returns successfully" Sep 5 00:12:07.781916 containerd[1435]: time="2025-09-05T00:12:07.781888371Z" level=info msg="StopPodSandbox for \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\"" Sep 5 00:12:07.876546 containerd[1435]: 2025-09-05 00:12:07.818 [WARNING][5677] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x7qbm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9349fa40-4aa1-44d3-a3a7-8c6748ecbd04", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c", Pod:"csi-node-driver-x7qbm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a2fe71483b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:07.876546 containerd[1435]: 2025-09-05 00:12:07.818 [INFO][5677] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:12:07.876546 containerd[1435]: 2025-09-05 00:12:07.819 [INFO][5677] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" iface="eth0" netns="" Sep 5 00:12:07.876546 containerd[1435]: 2025-09-05 00:12:07.819 [INFO][5677] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:12:07.876546 containerd[1435]: 2025-09-05 00:12:07.819 [INFO][5677] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:12:07.876546 containerd[1435]: 2025-09-05 00:12:07.862 [INFO][5686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" HandleID="k8s-pod-network.01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Workload="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:12:07.876546 containerd[1435]: 2025-09-05 00:12:07.863 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:07.876546 containerd[1435]: 2025-09-05 00:12:07.863 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:12:07.876546 containerd[1435]: 2025-09-05 00:12:07.871 [WARNING][5686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" HandleID="k8s-pod-network.01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Workload="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:12:07.876546 containerd[1435]: 2025-09-05 00:12:07.871 [INFO][5686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" HandleID="k8s-pod-network.01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Workload="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:12:07.876546 containerd[1435]: 2025-09-05 00:12:07.873 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:07.876546 containerd[1435]: 2025-09-05 00:12:07.875 [INFO][5677] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:12:07.877166 containerd[1435]: time="2025-09-05T00:12:07.876589045Z" level=info msg="TearDown network for sandbox \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\" successfully" Sep 5 00:12:07.877166 containerd[1435]: time="2025-09-05T00:12:07.876614565Z" level=info msg="StopPodSandbox for \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\" returns successfully" Sep 5 00:12:07.877166 containerd[1435]: time="2025-09-05T00:12:07.877049091Z" level=info msg="RemovePodSandbox for \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\"" Sep 5 00:12:07.877166 containerd[1435]: time="2025-09-05T00:12:07.877080812Z" level=info msg="Forcibly stopping sandbox \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\"" Sep 5 00:12:07.947462 containerd[1435]: 2025-09-05 00:12:07.910 [WARNING][5702] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x7qbm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9349fa40-4aa1-44d3-a3a7-8c6748ecbd04", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d41335ea70b40ac8b5f4aed50c7fcb057354f8251f84e51efe2982f3be6cb26c", Pod:"csi-node-driver-x7qbm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a2fe71483b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:07.947462 containerd[1435]: 2025-09-05 00:12:07.910 [INFO][5702] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:12:07.947462 containerd[1435]: 2025-09-05 00:12:07.910 [INFO][5702] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" iface="eth0" netns="" Sep 5 00:12:07.947462 containerd[1435]: 2025-09-05 00:12:07.910 [INFO][5702] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:12:07.947462 containerd[1435]: 2025-09-05 00:12:07.910 [INFO][5702] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:12:07.947462 containerd[1435]: 2025-09-05 00:12:07.928 [INFO][5711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" HandleID="k8s-pod-network.01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Workload="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:12:07.947462 containerd[1435]: 2025-09-05 00:12:07.928 [INFO][5711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:07.947462 containerd[1435]: 2025-09-05 00:12:07.928 [INFO][5711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:12:07.947462 containerd[1435]: 2025-09-05 00:12:07.937 [WARNING][5711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" HandleID="k8s-pod-network.01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Workload="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:12:07.947462 containerd[1435]: 2025-09-05 00:12:07.937 [INFO][5711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" HandleID="k8s-pod-network.01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Workload="localhost-k8s-csi--node--driver--x7qbm-eth0" Sep 5 00:12:07.947462 containerd[1435]: 2025-09-05 00:12:07.943 [INFO][5711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:07.947462 containerd[1435]: 2025-09-05 00:12:07.945 [INFO][5702] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08" Sep 5 00:12:07.947462 containerd[1435]: time="2025-09-05T00:12:07.947441927Z" level=info msg="TearDown network for sandbox \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\" successfully" Sep 5 00:12:07.974597 containerd[1435]: time="2025-09-05T00:12:07.974550366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:12:07.974756 containerd[1435]: time="2025-09-05T00:12:07.974640847Z" level=info msg="RemovePodSandbox \"01de5cccfe9e34b4c16734dcf1580e52ad5664ef67628cd48dcd6d2847393d08\" returns successfully" Sep 5 00:12:07.975172 containerd[1435]: time="2025-09-05T00:12:07.975151135Z" level=info msg="StopPodSandbox for \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\"" Sep 5 00:12:08.043913 containerd[1435]: 2025-09-05 00:12:08.008 [WARNING][5728] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82", Pod:"coredns-674b8bbfcf-r9lrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ea63c837ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:08.043913 containerd[1435]: 2025-09-05 00:12:08.009 [INFO][5728] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:12:08.043913 containerd[1435]: 2025-09-05 00:12:08.009 [INFO][5728] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" iface="eth0" netns="" Sep 5 00:12:08.043913 containerd[1435]: 2025-09-05 00:12:08.009 [INFO][5728] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:12:08.043913 containerd[1435]: 2025-09-05 00:12:08.009 [INFO][5728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:12:08.043913 containerd[1435]: 2025-09-05 00:12:08.026 [INFO][5737] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" HandleID="k8s-pod-network.2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Workload="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:12:08.043913 containerd[1435]: 2025-09-05 00:12:08.026 [INFO][5737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:08.043913 containerd[1435]: 2025-09-05 00:12:08.027 [INFO][5737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:12:08.043913 containerd[1435]: 2025-09-05 00:12:08.036 [WARNING][5737] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" HandleID="k8s-pod-network.2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Workload="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:12:08.043913 containerd[1435]: 2025-09-05 00:12:08.036 [INFO][5737] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" HandleID="k8s-pod-network.2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Workload="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:12:08.043913 containerd[1435]: 2025-09-05 00:12:08.038 [INFO][5737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:08.043913 containerd[1435]: 2025-09-05 00:12:08.041 [INFO][5728] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:12:08.044392 containerd[1435]: time="2025-09-05T00:12:08.043965502Z" level=info msg="TearDown network for sandbox \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\" successfully" Sep 5 00:12:08.044392 containerd[1435]: time="2025-09-05T00:12:08.043991062Z" level=info msg="StopPodSandbox for \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\" returns successfully" Sep 5 00:12:08.044875 containerd[1435]: time="2025-09-05T00:12:08.044587391Z" level=info msg="RemovePodSandbox for \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\"" Sep 5 00:12:08.044875 containerd[1435]: time="2025-09-05T00:12:08.044621671Z" level=info msg="Forcibly stopping sandbox \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\"" Sep 5 00:12:08.112360 containerd[1435]: 2025-09-05 00:12:08.077 [WARNING][5755] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cc10bbf2-cfa3-42d8-98e2-b1172ebe9c0e", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c7f64a0af5404b274f2953f3c9d247953aaf987563f1b57d158e091261f8d82", Pod:"coredns-674b8bbfcf-r9lrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ea63c837ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:08.112360 containerd[1435]: 2025-09-05 00:12:08.077 [INFO][5755] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:12:08.112360 containerd[1435]: 2025-09-05 00:12:08.077 [INFO][5755] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" iface="eth0" netns="" Sep 5 00:12:08.112360 containerd[1435]: 2025-09-05 00:12:08.077 [INFO][5755] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:12:08.112360 containerd[1435]: 2025-09-05 00:12:08.077 [INFO][5755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:12:08.112360 containerd[1435]: 2025-09-05 00:12:08.097 [INFO][5764] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" HandleID="k8s-pod-network.2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Workload="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:12:08.112360 containerd[1435]: 2025-09-05 00:12:08.097 [INFO][5764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:08.112360 containerd[1435]: 2025-09-05 00:12:08.097 [INFO][5764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:12:08.112360 containerd[1435]: 2025-09-05 00:12:08.106 [WARNING][5764] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" HandleID="k8s-pod-network.2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Workload="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:12:08.112360 containerd[1435]: 2025-09-05 00:12:08.106 [INFO][5764] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" HandleID="k8s-pod-network.2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Workload="localhost-k8s-coredns--674b8bbfcf--r9lrh-eth0" Sep 5 00:12:08.112360 containerd[1435]: 2025-09-05 00:12:08.107 [INFO][5764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:08.112360 containerd[1435]: 2025-09-05 00:12:08.110 [INFO][5755] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b" Sep 5 00:12:08.113170 containerd[1435]: time="2025-09-05T00:12:08.112837986Z" level=info msg="TearDown network for sandbox \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\" successfully" Sep 5 00:12:08.116556 containerd[1435]: time="2025-09-05T00:12:08.116374878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:12:08.116556 containerd[1435]: time="2025-09-05T00:12:08.116461559Z" level=info msg="RemovePodSandbox \"2e7c8ac824b63cd9fc9beb3884300c57dda0c83bb2a5e6214d659a6b847cc55b\" returns successfully" Sep 5 00:12:08.116990 containerd[1435]: time="2025-09-05T00:12:08.116941806Z" level=info msg="StopPodSandbox for \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\"" Sep 5 00:12:08.185275 containerd[1435]: 2025-09-05 00:12:08.151 [WARNING][5782] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ea063706-54f6-49f3-9a40-c61fd044db8b", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73", Pod:"coredns-674b8bbfcf-8r5bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c697833cdb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:08.185275 containerd[1435]: 2025-09-05 00:12:08.151 [INFO][5782] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:12:08.185275 containerd[1435]: 2025-09-05 00:12:08.151 [INFO][5782] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" iface="eth0" netns="" Sep 5 00:12:08.185275 containerd[1435]: 2025-09-05 00:12:08.151 [INFO][5782] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:12:08.185275 containerd[1435]: 2025-09-05 00:12:08.151 [INFO][5782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:12:08.185275 containerd[1435]: 2025-09-05 00:12:08.170 [INFO][5790] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" HandleID="k8s-pod-network.b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Workload="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:12:08.185275 containerd[1435]: 2025-09-05 00:12:08.170 [INFO][5790] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:08.185275 containerd[1435]: 2025-09-05 00:12:08.170 [INFO][5790] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:12:08.185275 containerd[1435]: 2025-09-05 00:12:08.179 [WARNING][5790] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" HandleID="k8s-pod-network.b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Workload="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:12:08.185275 containerd[1435]: 2025-09-05 00:12:08.179 [INFO][5790] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" HandleID="k8s-pod-network.b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Workload="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:12:08.185275 containerd[1435]: 2025-09-05 00:12:08.181 [INFO][5790] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:08.185275 containerd[1435]: 2025-09-05 00:12:08.183 [INFO][5782] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:12:08.185275 containerd[1435]: time="2025-09-05T00:12:08.185146280Z" level=info msg="TearDown network for sandbox \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\" successfully" Sep 5 00:12:08.185275 containerd[1435]: time="2025-09-05T00:12:08.185171041Z" level=info msg="StopPodSandbox for \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\" returns successfully" Sep 5 00:12:08.186161 containerd[1435]: time="2025-09-05T00:12:08.186128135Z" level=info msg="RemovePodSandbox for \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\"" Sep 5 00:12:08.186210 containerd[1435]: time="2025-09-05T00:12:08.186162175Z" level=info msg="Forcibly stopping sandbox \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\"" Sep 5 00:12:08.253362 containerd[1435]: 2025-09-05 00:12:08.220 [WARNING][5808] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ea063706-54f6-49f3-9a40-c61fd044db8b", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa4bbc829e8ab55175d4ae37d20c0a6be94e857560d2f69ee0b5d58d4266da73", Pod:"coredns-674b8bbfcf-8r5bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c697833cdb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:08.253362 containerd[1435]: 2025-09-05 00:12:08.220 [INFO][5808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:12:08.253362 containerd[1435]: 2025-09-05 00:12:08.220 [INFO][5808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" iface="eth0" netns="" Sep 5 00:12:08.253362 containerd[1435]: 2025-09-05 00:12:08.220 [INFO][5808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:12:08.253362 containerd[1435]: 2025-09-05 00:12:08.220 [INFO][5808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:12:08.253362 containerd[1435]: 2025-09-05 00:12:08.239 [INFO][5817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" HandleID="k8s-pod-network.b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Workload="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:12:08.253362 containerd[1435]: 2025-09-05 00:12:08.239 [INFO][5817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:08.253362 containerd[1435]: 2025-09-05 00:12:08.239 [INFO][5817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:12:08.253362 containerd[1435]: 2025-09-05 00:12:08.248 [WARNING][5817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" HandleID="k8s-pod-network.b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Workload="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:12:08.253362 containerd[1435]: 2025-09-05 00:12:08.248 [INFO][5817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" HandleID="k8s-pod-network.b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Workload="localhost-k8s-coredns--674b8bbfcf--8r5bf-eth0" Sep 5 00:12:08.253362 containerd[1435]: 2025-09-05 00:12:08.249 [INFO][5817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:08.253362 containerd[1435]: 2025-09-05 00:12:08.251 [INFO][5808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c" Sep 5 00:12:08.253362 containerd[1435]: time="2025-09-05T00:12:08.253348915Z" level=info msg="TearDown network for sandbox \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\" successfully" Sep 5 00:12:08.257089 containerd[1435]: time="2025-09-05T00:12:08.257044089Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:12:08.257147 containerd[1435]: time="2025-09-05T00:12:08.257122530Z" level=info msg="RemovePodSandbox \"b8b5d6483936e446b393dc23c10c872d6197e9e5019a7cec20ea331be6a0452c\" returns successfully" Sep 5 00:12:08.257779 containerd[1435]: time="2025-09-05T00:12:08.257749379Z" level=info msg="StopPodSandbox for \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\"" Sep 5 00:12:08.328835 containerd[1435]: 2025-09-05 00:12:08.293 [WARNING][5837] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--x9dsg-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5167327f-f042-4e3c-b7e0-0cc3388932ec", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b", Pod:"goldmane-54d579b49d-x9dsg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali873cbfecd3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:08.328835 containerd[1435]: 2025-09-05 00:12:08.293 [INFO][5837] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:12:08.328835 containerd[1435]: 2025-09-05 00:12:08.293 [INFO][5837] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" iface="eth0" netns="" Sep 5 00:12:08.328835 containerd[1435]: 2025-09-05 00:12:08.293 [INFO][5837] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:12:08.328835 containerd[1435]: 2025-09-05 00:12:08.293 [INFO][5837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:12:08.328835 containerd[1435]: 2025-09-05 00:12:08.312 [INFO][5846] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" HandleID="k8s-pod-network.7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Workload="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:12:08.328835 containerd[1435]: 2025-09-05 00:12:08.312 [INFO][5846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:08.328835 containerd[1435]: 2025-09-05 00:12:08.312 [INFO][5846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:12:08.328835 containerd[1435]: 2025-09-05 00:12:08.323 [WARNING][5846] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" HandleID="k8s-pod-network.7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Workload="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:12:08.328835 containerd[1435]: 2025-09-05 00:12:08.323 [INFO][5846] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" HandleID="k8s-pod-network.7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Workload="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:12:08.328835 containerd[1435]: 2025-09-05 00:12:08.325 [INFO][5846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:08.328835 containerd[1435]: 2025-09-05 00:12:08.327 [INFO][5837] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:12:08.330409 containerd[1435]: time="2025-09-05T00:12:08.328878296Z" level=info msg="TearDown network for sandbox \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\" successfully" Sep 5 00:12:08.330409 containerd[1435]: time="2025-09-05T00:12:08.328904297Z" level=info msg="StopPodSandbox for \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\" returns successfully" Sep 5 00:12:08.330409 containerd[1435]: time="2025-09-05T00:12:08.329362823Z" level=info msg="RemovePodSandbox for \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\"" Sep 5 00:12:08.330409 containerd[1435]: time="2025-09-05T00:12:08.329393504Z" level=info msg="Forcibly stopping sandbox \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\"" Sep 5 00:12:08.421760 containerd[1435]: 2025-09-05 00:12:08.378 [WARNING][5864] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--x9dsg-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5167327f-f042-4e3c-b7e0-0cc3388932ec", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dce39b2dfbec2775c725655cbe4075a1b5fc0707a5eea1a9c2d571bf97c7160b", Pod:"goldmane-54d579b49d-x9dsg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali873cbfecd3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:08.421760 containerd[1435]: 2025-09-05 00:12:08.378 [INFO][5864] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:12:08.421760 containerd[1435]: 2025-09-05 00:12:08.378 [INFO][5864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" iface="eth0" netns="" Sep 5 00:12:08.421760 containerd[1435]: 2025-09-05 00:12:08.378 [INFO][5864] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:12:08.421760 containerd[1435]: 2025-09-05 00:12:08.378 [INFO][5864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:12:08.421760 containerd[1435]: 2025-09-05 00:12:08.404 [INFO][5875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" HandleID="k8s-pod-network.7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Workload="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:12:08.421760 containerd[1435]: 2025-09-05 00:12:08.404 [INFO][5875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:08.421760 containerd[1435]: 2025-09-05 00:12:08.404 [INFO][5875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:12:08.421760 containerd[1435]: 2025-09-05 00:12:08.415 [WARNING][5875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" HandleID="k8s-pod-network.7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Workload="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:12:08.421760 containerd[1435]: 2025-09-05 00:12:08.415 [INFO][5875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" HandleID="k8s-pod-network.7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Workload="localhost-k8s-goldmane--54d579b49d--x9dsg-eth0" Sep 5 00:12:08.421760 containerd[1435]: 2025-09-05 00:12:08.417 [INFO][5875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:08.421760 containerd[1435]: 2025-09-05 00:12:08.419 [INFO][5864] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b" Sep 5 00:12:08.422145 containerd[1435]: time="2025-09-05T00:12:08.421799211Z" level=info msg="TearDown network for sandbox \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\" successfully" Sep 5 00:12:08.428100 containerd[1435]: time="2025-09-05T00:12:08.427860980Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:12:08.428100 containerd[1435]: time="2025-09-05T00:12:08.427932421Z" level=info msg="RemovePodSandbox \"7c1a5348f056d90cb8997198475c6ecc27217fa1f2f6ff712b037b18e10a528b\" returns successfully" Sep 5 00:12:08.428657 containerd[1435]: time="2025-09-05T00:12:08.428397588Z" level=info msg="StopPodSandbox for \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\"" Sep 5 00:12:08.502923 containerd[1435]: 2025-09-05 00:12:08.464 [WARNING][5892] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0", GenerateName:"calico-apiserver-85ffc8f694-", Namespace:"calico-apiserver", SelfLink:"", UID:"49d7d624-94b8-4c95-b7e4-d89b51613009", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85ffc8f694", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5", Pod:"calico-apiserver-85ffc8f694-8vh85", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f6e1cdbae3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:08.502923 containerd[1435]: 2025-09-05 00:12:08.464 [INFO][5892] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:12:08.502923 containerd[1435]: 2025-09-05 00:12:08.464 [INFO][5892] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" iface="eth0" netns="" Sep 5 00:12:08.502923 containerd[1435]: 2025-09-05 00:12:08.464 [INFO][5892] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:12:08.502923 containerd[1435]: 2025-09-05 00:12:08.464 [INFO][5892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:12:08.502923 containerd[1435]: 2025-09-05 00:12:08.485 [INFO][5900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" HandleID="k8s-pod-network.16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Workload="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:12:08.502923 containerd[1435]: 2025-09-05 00:12:08.485 [INFO][5900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:08.502923 containerd[1435]: 2025-09-05 00:12:08.485 [INFO][5900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:12:08.502923 containerd[1435]: 2025-09-05 00:12:08.494 [WARNING][5900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" HandleID="k8s-pod-network.16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Workload="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:12:08.502923 containerd[1435]: 2025-09-05 00:12:08.494 [INFO][5900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" HandleID="k8s-pod-network.16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Workload="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:12:08.502923 containerd[1435]: 2025-09-05 00:12:08.499 [INFO][5900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:08.502923 containerd[1435]: 2025-09-05 00:12:08.500 [INFO][5892] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:12:08.502923 containerd[1435]: time="2025-09-05T00:12:08.502563789Z" level=info msg="TearDown network for sandbox \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\" successfully" Sep 5 00:12:08.502923 containerd[1435]: time="2025-09-05T00:12:08.502588390Z" level=info msg="StopPodSandbox for \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\" returns successfully" Sep 5 00:12:08.503411 containerd[1435]: time="2025-09-05T00:12:08.503060236Z" level=info msg="RemovePodSandbox for \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\"" Sep 5 00:12:08.503411 containerd[1435]: time="2025-09-05T00:12:08.503099797Z" level=info msg="Forcibly stopping sandbox \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\"" Sep 5 00:12:08.575989 containerd[1435]: 2025-09-05 00:12:08.540 [WARNING][5916] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0", GenerateName:"calico-apiserver-85ffc8f694-", Namespace:"calico-apiserver", SelfLink:"", UID:"49d7d624-94b8-4c95-b7e4-d89b51613009", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85ffc8f694", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1db15e8f21619849e0d8345b60e2efc983299811992faff85852ff4e7a5a8e5", Pod:"calico-apiserver-85ffc8f694-8vh85", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f6e1cdbae3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:12:08.575989 containerd[1435]: 2025-09-05 00:12:08.540 [INFO][5916] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:12:08.575989 containerd[1435]: 2025-09-05 00:12:08.540 [INFO][5916] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" iface="eth0" netns="" Sep 5 00:12:08.575989 containerd[1435]: 2025-09-05 00:12:08.540 [INFO][5916] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:12:08.575989 containerd[1435]: 2025-09-05 00:12:08.540 [INFO][5916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:12:08.575989 containerd[1435]: 2025-09-05 00:12:08.559 [INFO][5924] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" HandleID="k8s-pod-network.16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Workload="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:12:08.575989 containerd[1435]: 2025-09-05 00:12:08.559 [INFO][5924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:08.575989 containerd[1435]: 2025-09-05 00:12:08.559 [INFO][5924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:12:08.575989 containerd[1435]: 2025-09-05 00:12:08.568 [WARNING][5924] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" HandleID="k8s-pod-network.16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Workload="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:12:08.575989 containerd[1435]: 2025-09-05 00:12:08.568 [INFO][5924] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" HandleID="k8s-pod-network.16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Workload="localhost-k8s-calico--apiserver--85ffc8f694--8vh85-eth0" Sep 5 00:12:08.575989 containerd[1435]: 2025-09-05 00:12:08.571 [INFO][5924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:08.575989 containerd[1435]: 2025-09-05 00:12:08.573 [INFO][5916] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c" Sep 5 00:12:08.575989 containerd[1435]: time="2025-09-05T00:12:08.575404091Z" level=info msg="TearDown network for sandbox \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\" successfully" Sep 5 00:12:08.579504 containerd[1435]: time="2025-09-05T00:12:08.579467831Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:12:08.579594 containerd[1435]: time="2025-09-05T00:12:08.579541472Z" level=info msg="RemovePodSandbox \"16d24ebcc2be66ad3176473ccfe5607fdc0d59b4c0946935ecb83763c339ce4c\" returns successfully" Sep 5 00:12:08.580223 containerd[1435]: time="2025-09-05T00:12:08.580180841Z" level=info msg="StopPodSandbox for \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\"" Sep 5 00:12:08.671997 containerd[1435]: 2025-09-05 00:12:08.638 [WARNING][5942] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" WorkloadEndpoint="localhost-k8s-whisker--dcc88dcbb--zdnrb-eth0" Sep 5 00:12:08.671997 containerd[1435]: 2025-09-05 00:12:08.638 [INFO][5942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:12:08.671997 containerd[1435]: 2025-09-05 00:12:08.638 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" iface="eth0" netns="" Sep 5 00:12:08.671997 containerd[1435]: 2025-09-05 00:12:08.638 [INFO][5942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:12:08.671997 containerd[1435]: 2025-09-05 00:12:08.638 [INFO][5942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:12:08.671997 containerd[1435]: 2025-09-05 00:12:08.658 [INFO][5952] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" HandleID="k8s-pod-network.df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Workload="localhost-k8s-whisker--dcc88dcbb--zdnrb-eth0" Sep 5 00:12:08.671997 containerd[1435]: 2025-09-05 00:12:08.658 [INFO][5952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:08.671997 containerd[1435]: 2025-09-05 00:12:08.659 [INFO][5952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:12:08.671997 containerd[1435]: 2025-09-05 00:12:08.667 [WARNING][5952] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" HandleID="k8s-pod-network.df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Workload="localhost-k8s-whisker--dcc88dcbb--zdnrb-eth0" Sep 5 00:12:08.671997 containerd[1435]: 2025-09-05 00:12:08.667 [INFO][5952] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" HandleID="k8s-pod-network.df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Workload="localhost-k8s-whisker--dcc88dcbb--zdnrb-eth0" Sep 5 00:12:08.671997 containerd[1435]: 2025-09-05 00:12:08.668 [INFO][5952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:08.671997 containerd[1435]: 2025-09-05 00:12:08.670 [INFO][5942] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:12:08.673429 containerd[1435]: time="2025-09-05T00:12:08.672377545Z" level=info msg="TearDown network for sandbox \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\" successfully" Sep 5 00:12:08.673429 containerd[1435]: time="2025-09-05T00:12:08.672407666Z" level=info msg="StopPodSandbox for \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\" returns successfully" Sep 5 00:12:08.673429 containerd[1435]: time="2025-09-05T00:12:08.673214718Z" level=info msg="RemovePodSandbox for \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\"" Sep 5 00:12:08.673429 containerd[1435]: time="2025-09-05T00:12:08.673240238Z" level=info msg="Forcibly stopping sandbox \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\"" Sep 5 00:12:08.740322 containerd[1435]: 2025-09-05 00:12:08.705 [WARNING][5969] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" WorkloadEndpoint="localhost-k8s-whisker--dcc88dcbb--zdnrb-eth0" Sep 5 00:12:08.740322 containerd[1435]: 2025-09-05 00:12:08.705 [INFO][5969] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:12:08.740322 containerd[1435]: 2025-09-05 00:12:08.705 [INFO][5969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" iface="eth0" netns="" Sep 5 00:12:08.740322 containerd[1435]: 2025-09-05 00:12:08.705 [INFO][5969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:12:08.740322 containerd[1435]: 2025-09-05 00:12:08.705 [INFO][5969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:12:08.740322 containerd[1435]: 2025-09-05 00:12:08.723 [INFO][5978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" HandleID="k8s-pod-network.df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Workload="localhost-k8s-whisker--dcc88dcbb--zdnrb-eth0" Sep 5 00:12:08.740322 containerd[1435]: 2025-09-05 00:12:08.723 [INFO][5978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:12:08.740322 containerd[1435]: 2025-09-05 00:12:08.723 [INFO][5978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:12:08.740322 containerd[1435]: 2025-09-05 00:12:08.733 [WARNING][5978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" HandleID="k8s-pod-network.df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Workload="localhost-k8s-whisker--dcc88dcbb--zdnrb-eth0" Sep 5 00:12:08.740322 containerd[1435]: 2025-09-05 00:12:08.733 [INFO][5978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" HandleID="k8s-pod-network.df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Workload="localhost-k8s-whisker--dcc88dcbb--zdnrb-eth0" Sep 5 00:12:08.740322 containerd[1435]: 2025-09-05 00:12:08.736 [INFO][5978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:12:08.740322 containerd[1435]: 2025-09-05 00:12:08.738 [INFO][5969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2" Sep 5 00:12:08.740322 containerd[1435]: time="2025-09-05T00:12:08.739870610Z" level=info msg="TearDown network for sandbox \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\" successfully" Sep 5 00:12:08.742970 containerd[1435]: time="2025-09-05T00:12:08.742906374Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:12:08.743022 containerd[1435]: time="2025-09-05T00:12:08.743008495Z" level=info msg="RemovePodSandbox \"df5613c955411de098e0c4ad2c235c4d692089903caf0622ab23faa8a9e831b2\" returns successfully" Sep 5 00:12:12.684203 systemd[1]: Started sshd@12-10.0.0.23:22-10.0.0.1:60344.service - OpenSSH per-connection server daemon (10.0.0.1:60344). Sep 5 00:12:12.718417 sshd[5987]: Accepted publickey for core from 10.0.0.1 port 60344 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:12:12.719975 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:12.725231 systemd-logind[1417]: New session 13 of user core. Sep 5 00:12:12.738887 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 00:12:12.922479 sshd[5987]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:12.928910 systemd[1]: sshd@12-10.0.0.23:22-10.0.0.1:60344.service: Deactivated successfully. Sep 5 00:12:12.931229 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:12:12.933258 systemd-logind[1417]: Session 13 logged out. Waiting for processes to exit. Sep 5 00:12:12.941633 systemd[1]: Started sshd@13-10.0.0.23:22-10.0.0.1:60346.service - OpenSSH per-connection server daemon (10.0.0.1:60346). Sep 5 00:12:12.944076 systemd-logind[1417]: Removed session 13. Sep 5 00:12:12.982423 sshd[6001]: Accepted publickey for core from 10.0.0.1 port 60346 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:12:12.984073 sshd[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:12.988321 systemd-logind[1417]: New session 14 of user core. Sep 5 00:12:12.998449 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:12:13.214529 sshd[6001]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:13.223976 systemd[1]: sshd@13-10.0.0.23:22-10.0.0.1:60346.service: Deactivated successfully. Sep 5 00:12:13.226072 systemd[1]: session-14.scope: Deactivated successfully. 
Sep 5 00:12:13.227367 systemd-logind[1417]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:12:13.234559 systemd[1]: Started sshd@14-10.0.0.23:22-10.0.0.1:60354.service - OpenSSH per-connection server daemon (10.0.0.1:60354). Sep 5 00:12:13.235935 systemd-logind[1417]: Removed session 14. Sep 5 00:12:13.265348 sshd[6016]: Accepted publickey for core from 10.0.0.1 port 60354 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:12:13.266634 sshd[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:13.270255 systemd-logind[1417]: New session 15 of user core. Sep 5 00:12:13.278450 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:12:13.870329 sshd[6016]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:13.878677 systemd[1]: sshd@14-10.0.0.23:22-10.0.0.1:60354.service: Deactivated successfully. Sep 5 00:12:13.881089 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:12:13.883401 systemd-logind[1417]: Session 15 logged out. Waiting for processes to exit. Sep 5 00:12:13.890770 systemd[1]: Started sshd@15-10.0.0.23:22-10.0.0.1:60368.service - OpenSSH per-connection server daemon (10.0.0.1:60368). Sep 5 00:12:13.895313 systemd-logind[1417]: Removed session 15. Sep 5 00:12:13.927908 sshd[6038]: Accepted publickey for core from 10.0.0.1 port 60368 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:12:13.929303 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:13.932938 systemd-logind[1417]: New session 16 of user core. Sep 5 00:12:13.939443 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 00:12:14.395490 sshd[6038]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:14.405946 systemd[1]: sshd@15-10.0.0.23:22-10.0.0.1:60368.service: Deactivated successfully. Sep 5 00:12:14.408211 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 00:12:14.410056 systemd-logind[1417]: Session 16 logged out. Waiting for processes to exit. Sep 5 00:12:14.417594 systemd[1]: Started sshd@16-10.0.0.23:22-10.0.0.1:60380.service - OpenSSH per-connection server daemon (10.0.0.1:60380). Sep 5 00:12:14.419392 systemd-logind[1417]: Removed session 16. Sep 5 00:12:14.448568 sshd[6050]: Accepted publickey for core from 10.0.0.1 port 60380 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:12:14.450006 sshd[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:14.453501 systemd-logind[1417]: New session 17 of user core. Sep 5 00:12:14.467485 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 00:12:14.591368 sshd[6050]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:14.594570 systemd[1]: sshd@16-10.0.0.23:22-10.0.0.1:60380.service: Deactivated successfully. Sep 5 00:12:14.596274 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 00:12:14.596983 systemd-logind[1417]: Session 17 logged out. Waiting for processes to exit. Sep 5 00:12:14.598082 systemd-logind[1417]: Removed session 17. Sep 5 00:12:19.606452 systemd[1]: Started sshd@17-10.0.0.23:22-10.0.0.1:60394.service - OpenSSH per-connection server daemon (10.0.0.1:60394). 
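[Editor's aside on the sshd/systemd-logind entries in this stretch: every accepted connection produces a matched "New session N of user core" / "Removed session N" pair, so an unmatched open is a quick health signal when auditing a journal like this one. A small hedged Go sketch that pairs those exact phrasings follows; the tool itself is hypothetical, not part of this system.]

```go
// Illustrative sketch: pairing the systemd-logind "New session N" /
// "Removed session N" entries above to spot sessions that never close.
// The regexes target the exact phrasing in this journal; the program
// itself is hypothetical, not a real tool from this host.
package main

import (
	"fmt"
	"regexp"
)

var (
	newRe     = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	removedRe = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	// Stand-in journal lines in the format seen above.
	journal := []string{
		"systemd-logind[1417]: New session 13 of user core.",
		"systemd-logind[1417]: Removed session 13.",
		"systemd-logind[1417]: New session 14 of user core.",
	}
	open := map[string]string{} // session ID -> user
	for _, line := range journal {
		if m := newRe.FindStringSubmatch(line); m != nil {
			open[m[1]] = m[2]
		} else if m := removedRe.FindStringSubmatch(line); m != nil {
			delete(open, m[1])
		}
	}
	for id, user := range open {
		fmt.Printf("session %s of user %s never closed\n", id, user)
	}
}
```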
Sep 5 00:12:19.641679 sshd[6066]: Accepted publickey for core from 10.0.0.1 port 60394 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:12:19.643319 sshd[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:19.647820 systemd-logind[1417]: New session 18 of user core. Sep 5 00:12:19.654457 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 00:12:19.778425 sshd[6066]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:19.781675 systemd[1]: sshd@17-10.0.0.23:22-10.0.0.1:60394.service: Deactivated successfully. Sep 5 00:12:19.783450 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 00:12:19.785087 systemd-logind[1417]: Session 18 logged out. Waiting for processes to exit. Sep 5 00:12:19.786371 systemd-logind[1417]: Removed session 18. Sep 5 00:12:21.315616 kubelet[2479]: E0905 00:12:21.313785 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:12:24.787816 systemd[1]: Started sshd@18-10.0.0.23:22-10.0.0.1:49330.service - OpenSSH per-connection server daemon (10.0.0.1:49330). Sep 5 00:12:24.824110 sshd[6091]: Accepted publickey for core from 10.0.0.1 port 49330 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:12:24.825333 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:24.829000 systemd-logind[1417]: New session 19 of user core. Sep 5 00:12:24.838415 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 00:12:24.952133 sshd[6091]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:24.955264 systemd[1]: sshd@18-10.0.0.23:22-10.0.0.1:49330.service: Deactivated successfully. Sep 5 00:12:24.956894 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 00:12:24.959938 systemd-logind[1417]: Session 19 logged out. Waiting for processes to exit. Sep 5 00:12:24.960702 systemd-logind[1417]: Removed session 19. Sep 5 00:12:29.965209 systemd[1]: Started sshd@19-10.0.0.23:22-10.0.0.1:38456.service - OpenSSH per-connection server daemon (10.0.0.1:38456). Sep 5 00:12:30.000191 sshd[6148]: Accepted publickey for core from 10.0.0.1 port 38456 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:12:30.001848 sshd[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:12:30.007661 systemd-logind[1417]: New session 20 of user core. Sep 5 00:12:30.021480 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 00:12:30.197607 sshd[6148]: pam_unix(sshd:session): session closed for user core Sep 5 00:12:30.201340 systemd-logind[1417]: Session 20 logged out. Waiting for processes to exit. Sep 5 00:12:30.201716 systemd[1]: sshd@19-10.0.0.23:22-10.0.0.1:38456.service: Deactivated successfully. Sep 5 00:12:30.204861 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 00:12:30.205837 systemd-logind[1417]: Removed session 20.
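[Editor's note on the kubelet "Nameserver limits exceeded" entry above: the glibc resolver reads at most three nameserver lines from resolv.conf, so kubelet keeps the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and logs any further upstreams as omitted. Below is a hedged sketch of that trimming step, assuming a cap of three; the function and constant are illustrative, not kubelet's actual implementation.]

```go
// Illustrative sketch of the nameserver trimming behind the kubelet
// "Nameserver limits exceeded" entry above. glibc honors at most three
// "nameserver" lines, so anything past the third is dead weight; the
// error text mirrors the log, but this function is hypothetical, not
// kubelet's actual code.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // assumed cap, matching glibc's three-nameserver limit

// trimNameservers keeps the first maxNameservers entries and reports
// whether any had to be omitted, like the "applied nameserver line" above.
func trimNameservers(ns []string) ([]string, error) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	kept := ns[:maxNameservers]
	return kept, fmt.Errorf(
		"Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s",
		strings.Join(kept, " "))
}

func main() {
	// Four configured upstreams: one too many for the resolver cap.
	kept, err := trimNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	if err != nil {
		fmt.Println("E dns.go:153]", err) // kubelet logs the error and carries on
	}
	fmt.Println("applied:", kept)
}
```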