Sep 12 17:38:48.825159 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 17:38:48.825180 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 15:59:19 -00 2025
Sep 12 17:38:48.825190 kernel: KASLR enabled
Sep 12 17:38:48.825196 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:38:48.825202 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 12 17:38:48.825208 kernel: random: crng init done
Sep 12 17:38:48.825215 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:38:48.825222 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 12 17:38:48.825228 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 17:38:48.825236 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:48.825243 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:48.825249 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:48.825255 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:48.825262 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:48.825270 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:48.825278 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:48.825285 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:48.825292 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:48.825298 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 12 17:38:48.825305 kernel: NUMA: Failed to initialise from firmware
Sep 12 17:38:48.825312 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:38:48.825319 kernel: NUMA: NODE_DATA [mem 0xdc95a800-0xdc95ffff]
Sep 12 17:38:48.825326 kernel: Zone ranges:
Sep 12 17:38:48.825332 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:38:48.825339 kernel: DMA32 empty
Sep 12 17:38:48.825347 kernel: Normal empty
Sep 12 17:38:48.825353 kernel: Movable zone start for each node
Sep 12 17:38:48.825360 kernel: Early memory node ranges
Sep 12 17:38:48.825367 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 12 17:38:48.825374 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 12 17:38:48.825380 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 12 17:38:48.825387 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 12 17:38:48.825394 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 12 17:38:48.825401 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 12 17:38:48.825407 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 12 17:38:48.825414 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:38:48.825421 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 12 17:38:48.825428 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:38:48.825435 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 17:38:48.825442 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:38:48.825451 kernel: psci: Trusted OS migration not required
Sep 12 17:38:48.825458 kernel: psci: SMC Calling Convention v1.1
Sep 12 17:38:48.825466 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 12 17:38:48.825474 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 17:38:48.825481 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 17:38:48.825489 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 12 17:38:48.825496 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:38:48.825503 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:38:48.825510 kernel: CPU features: detected: Hardware dirty bit management
Sep 12 17:38:48.825518 kernel: CPU features: detected: Spectre-v4
Sep 12 17:38:48.825525 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:38:48.825532 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 17:38:48.825558 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 17:38:48.825567 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 17:38:48.825575 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 17:38:48.825582 kernel: alternatives: applying boot alternatives
Sep 12 17:38:48.825594 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:38:48.825602 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:38:48.825609 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:38:48.825616 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:38:48.825624 kernel: Fallback order for Node 0: 0
Sep 12 17:38:48.825644 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 12 17:38:48.825651 kernel: Policy zone: DMA
Sep 12 17:38:48.825659 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:38:48.825668 kernel: software IO TLB: area num 4.
Sep 12 17:38:48.825675 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 12 17:38:48.825683 kernel: Memory: 2386348K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 185940K reserved, 0K cma-reserved)
Sep 12 17:38:48.825690 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:38:48.825697 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:38:48.825705 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:38:48.825715 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:38:48.825724 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:38:48.825733 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:38:48.825743 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:38:48.825752 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:38:48.825760 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:38:48.825768 kernel: GICv3: 256 SPIs implemented
Sep 12 17:38:48.825775 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:38:48.825782 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:38:48.825789 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 17:38:48.825796 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 12 17:38:48.825803 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 12 17:38:48.825811 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 17:38:48.825818 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 12 17:38:48.825825 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 12 17:38:48.825832 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 12 17:38:48.825839 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:38:48.825848 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:38:48.825855 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 17:38:48.825862 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 17:38:48.825870 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 17:38:48.825884 kernel: arm-pv: using stolen time PV
Sep 12 17:38:48.825892 kernel: Console: colour dummy device 80x25
Sep 12 17:38:48.825899 kernel: ACPI: Core revision 20230628
Sep 12 17:38:48.825907 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 17:38:48.825914 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:38:48.825922 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:38:48.825930 kernel: landlock: Up and running.
Sep 12 17:38:48.825937 kernel: SELinux: Initializing.
Sep 12 17:38:48.825945 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:38:48.825952 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:38:48.825960 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:38:48.825967 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:38:48.825975 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:38:48.825982 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:38:48.825989 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 12 17:38:48.825998 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 12 17:38:48.826005 kernel: Remapping and enabling EFI services.
Sep 12 17:38:48.826013 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:38:48.826020 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:38:48.826027 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 12 17:38:48.826035 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 12 17:38:48.826042 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:38:48.826050 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 17:38:48.826057 kernel: Detected PIPT I-cache on CPU2
Sep 12 17:38:48.826064 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 12 17:38:48.826086 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 12 17:38:48.826094 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:38:48.826112 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 12 17:38:48.826121 kernel: Detected PIPT I-cache on CPU3
Sep 12 17:38:48.826129 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 12 17:38:48.826137 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 12 17:38:48.826144 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:38:48.826152 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 12 17:38:48.826160 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:38:48.826169 kernel: SMP: Total of 4 processors activated.
Sep 12 17:38:48.826177 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:38:48.826185 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 17:38:48.826193 kernel: CPU features: detected: Common not Private translations
Sep 12 17:38:48.826200 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:38:48.826208 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 12 17:38:48.826216 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 17:38:48.826224 kernel: CPU features: detected: LSE atomic instructions
Sep 12 17:38:48.826233 kernel: CPU features: detected: Privileged Access Never
Sep 12 17:38:48.826240 kernel: CPU features: detected: RAS Extension Support
Sep 12 17:38:48.826248 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 12 17:38:48.826256 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:38:48.826263 kernel: alternatives: applying system-wide alternatives
Sep 12 17:38:48.826271 kernel: devtmpfs: initialized
Sep 12 17:38:48.826279 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:38:48.826287 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:38:48.826294 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:38:48.826303 kernel: SMBIOS 3.0.0 present.
Sep 12 17:38:48.826311 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 12 17:38:48.826319 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:38:48.826327 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:38:48.826335 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:38:48.826343 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:38:48.826350 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:38:48.826358 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Sep 12 17:38:48.826367 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:38:48.826375 kernel: cpuidle: using governor menu
Sep 12 17:38:48.826383 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:38:48.826390 kernel: ASID allocator initialised with 32768 entries
Sep 12 17:38:48.826398 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:38:48.826406 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:38:48.826414 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 17:38:48.826421 kernel: Modules: 0 pages in range for non-PLT usage
Sep 12 17:38:48.826429 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 17:38:48.826437 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:38:48.826446 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:38:48.826454 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:38:48.826461 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:38:48.826469 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:38:48.826477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:38:48.826485 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:38:48.826492 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:38:48.826500 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:38:48.826508 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:38:48.826517 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:38:48.826525 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:38:48.826532 kernel: ACPI: Interpreter enabled
Sep 12 17:38:48.826560 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:38:48.826568 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 17:38:48.826576 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 17:38:48.826584 kernel: printk: console [ttyAMA0] enabled
Sep 12 17:38:48.826591 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:38:48.826714 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:38:48.826790 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 17:38:48.826859 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 17:38:48.826927 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 12 17:38:48.826993 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 12 17:38:48.827003 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 12 17:38:48.827011 kernel: PCI host bridge to bus 0000:00
Sep 12 17:38:48.827082 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 12 17:38:48.827155 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 17:38:48.827216 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 12 17:38:48.827276 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:38:48.827361 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 12 17:38:48.827440 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 17:38:48.827510 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 12 17:38:48.827688 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 12 17:38:48.827762 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 17:38:48.827831 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 17:38:48.827898 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 12 17:38:48.827968 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 12 17:38:48.828031 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 12 17:38:48.828092 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 17:38:48.828168 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 12 17:38:48.828179 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 17:38:48.828187 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 17:38:48.828195 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 17:38:48.828203 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 17:38:48.828211 kernel: iommu: Default domain type: Translated
Sep 12 17:38:48.828219 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:38:48.828227 kernel: efivars: Registered efivars operations
Sep 12 17:38:48.828237 kernel: vgaarb: loaded
Sep 12 17:38:48.828245 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:38:48.828253 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:38:48.828261 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:38:48.828268 kernel: pnp: PnP ACPI init
Sep 12 17:38:48.828342 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 12 17:38:48.828354 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 17:38:48.828361 kernel: NET: Registered PF_INET protocol family
Sep 12 17:38:48.828369 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:38:48.828379 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:38:48.828387 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:38:48.828395 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:38:48.828403 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:38:48.828411 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:38:48.828419 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:38:48.828427 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:38:48.828435 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:38:48.828445 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:38:48.828452 kernel: kvm [1]: HYP mode not available
Sep 12 17:38:48.828471 kernel: Initialise system trusted keyrings
Sep 12 17:38:48.828479 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:38:48.828487 kernel: Key type asymmetric registered
Sep 12 17:38:48.828496 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:38:48.828504 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:38:48.828511 kernel: io scheduler mq-deadline registered
Sep 12 17:38:48.828519 kernel: io scheduler kyber registered
Sep 12 17:38:48.828527 kernel: io scheduler bfq registered
Sep 12 17:38:48.828545 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 17:38:48.828556 kernel: ACPI: button: Power Button [PWRB]
Sep 12 17:38:48.828564 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 17:38:48.828659 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 12 17:38:48.828670 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:38:48.828678 kernel: thunder_xcv, ver 1.0
Sep 12 17:38:48.828685 kernel: thunder_bgx, ver 1.0
Sep 12 17:38:48.828693 kernel: nicpf, ver 1.0
Sep 12 17:38:48.828701 kernel: nicvf, ver 1.0
Sep 12 17:38:48.828783 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:38:48.828852 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:38:48 UTC (1757698728)
Sep 12 17:38:48.828862 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:38:48.828870 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 12 17:38:48.828878 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 17:38:48.828885 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:38:48.828893 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:38:48.828901 kernel: Segment Routing with IPv6
Sep 12 17:38:48.828911 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:38:48.828918 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:38:48.828926 kernel: Key type dns_resolver registered
Sep 12 17:38:48.828934 kernel: registered taskstats version 1
Sep 12 17:38:48.828941 kernel: Loading compiled-in X.509 certificates
Sep 12 17:38:48.828949 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 2d576b5e69e6c5de2f731966fe8b55173c144d02'
Sep 12 17:38:48.828957 kernel: Key type .fscrypt registered
Sep 12 17:38:48.828964 kernel: Key type fscrypt-provisioning registered
Sep 12 17:38:48.828972 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:38:48.828981 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:38:48.828989 kernel: ima: No architecture policies found
Sep 12 17:38:48.828996 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:38:48.829004 kernel: clk: Disabling unused clocks
Sep 12 17:38:48.829012 kernel: Freeing unused kernel memory: 39488K
Sep 12 17:38:48.829019 kernel: Run /init as init process
Sep 12 17:38:48.829027 kernel: with arguments:
Sep 12 17:38:48.829035 kernel: /init
Sep 12 17:38:48.829042 kernel: with environment:
Sep 12 17:38:48.829051 kernel: HOME=/
Sep 12 17:38:48.829059 kernel: TERM=linux
Sep 12 17:38:48.829066 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:38:48.829076 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:38:48.829086 systemd[1]: Detected virtualization kvm.
Sep 12 17:38:48.829094 systemd[1]: Detected architecture arm64.
Sep 12 17:38:48.829108 systemd[1]: Running in initrd.
Sep 12 17:38:48.829118 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:38:48.829126 systemd[1]: Hostname set to <localhost>.
Sep 12 17:38:48.829135 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:38:48.829143 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:38:48.829151 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:38:48.829159 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:38:48.829168 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:38:48.829177 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:38:48.829190 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:38:48.829199 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:38:48.829208 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:38:48.829217 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:38:48.829225 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:38:48.829234 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:38:48.829242 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:38:48.829252 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:38:48.829260 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:38:48.829269 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:38:48.829277 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:38:48.829285 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:38:48.829293 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:38:48.829302 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:38:48.829310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:38:48.829318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:38:48.829328 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:38:48.829336 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:38:48.829344 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:38:48.829353 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:38:48.829361 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:38:48.829369 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:38:48.829378 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:38:48.829386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:38:48.829396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:38:48.829404 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:38:48.829413 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:38:48.829421 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:38:48.829445 systemd-journald[237]: Collecting audit messages is disabled.
Sep 12 17:38:48.829467 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:38:48.829476 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:38:48.829485 systemd-journald[237]: Journal started
Sep 12 17:38:48.829505 systemd-journald[237]: Runtime Journal (/run/log/journal/be5435210332444dbf2465be876b219a) is 5.9M, max 47.3M, 41.4M free.
Sep 12 17:38:48.822310 systemd-modules-load[238]: Inserted module 'overlay'
Sep 12 17:38:48.833179 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:38:48.833205 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:38:48.834312 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:38:48.837584 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:38:48.839575 kernel: Bridge firewalling registered
Sep 12 17:38:48.838007 systemd-modules-load[238]: Inserted module 'br_netfilter'
Sep 12 17:38:48.839730 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:38:48.841646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:38:48.842954 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:38:48.846420 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:38:48.849235 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:38:48.851824 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:38:48.857695 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:38:48.858578 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:38:48.860203 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:38:48.863078 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:38:48.871152 dracut-cmdline[270]: dracut-dracut-053
Sep 12 17:38:48.873543 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:38:48.886333 systemd-resolved[275]: Positive Trust Anchors:
Sep 12 17:38:48.886349 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:38:48.886380 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:38:48.890962 systemd-resolved[275]: Defaulting to hostname 'linux'.
Sep 12 17:38:48.891812 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:38:48.894609 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:38:48.934563 kernel: SCSI subsystem initialized
Sep 12 17:38:48.939554 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:38:48.946555 kernel: iscsi: registered transport (tcp)
Sep 12 17:38:48.959568 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:38:48.959601 kernel: QLogic iSCSI HBA Driver
Sep 12 17:38:49.000078 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:38:49.018657 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:38:49.034161 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:38:49.034211 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:38:49.034222 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:38:49.078552 kernel: raid6: neonx8 gen() 15764 MB/s
Sep 12 17:38:49.095559 kernel: raid6: neonx4 gen() 15684 MB/s
Sep 12 17:38:49.112557 kernel: raid6: neonx2 gen() 13211 MB/s
Sep 12 17:38:49.129546 kernel: raid6: neonx1 gen() 10507 MB/s
Sep 12 17:38:49.146545 kernel: raid6: int64x8 gen() 6958 MB/s
Sep 12 17:38:49.163549 kernel: raid6: int64x4 gen() 7352 MB/s
Sep 12 17:38:49.180550 kernel: raid6: int64x2 gen() 6133 MB/s
Sep 12 17:38:49.197551 kernel: raid6: int64x1 gen() 5055 MB/s
Sep 12 17:38:49.197576 kernel: raid6: using algorithm neonx8 gen() 15764 MB/s
Sep 12 17:38:49.214575 kernel: raid6: .... xor() 12061 MB/s, rmw enabled
Sep 12 17:38:49.214601 kernel: raid6: using neon recovery algorithm
Sep 12 17:38:49.219557 kernel: xor: measuring software checksum speed
Sep 12 17:38:49.219585 kernel: 8regs : 19335 MB/sec
Sep 12 17:38:49.221038 kernel: 32regs : 18370 MB/sec
Sep 12 17:38:49.221051 kernel: arm64_neon : 26339 MB/sec
Sep 12 17:38:49.221061 kernel: xor: using function: arm64_neon (26339 MB/sec)
Sep 12 17:38:49.268563 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:38:49.279014 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:38:49.291690 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:38:49.302158 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Sep 12 17:38:49.305235 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:38:49.307490 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:38:49.321683 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Sep 12 17:38:49.346605 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:38:49.351670 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:38:49.390773 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:38:49.400737 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:38:49.412577 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:38:49.413874 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:38:49.415379 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:38:49.418375 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:38:49.424726 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:38:49.436136 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:38:49.437999 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 12 17:38:49.444225 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 17:38:49.447578 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:38:49.447614 kernel: GPT:9289727 != 19775487
Sep 12 17:38:49.447625 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:38:49.448850 kernel: GPT:9289727 != 19775487
Sep 12 17:38:49.448896 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:38:49.449580 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:38:49.453293 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:38:49.453401 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:38:49.458041 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:38:49.459589 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:38:49.459729 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:38:49.466386 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (516)
Sep 12 17:38:49.466410 kernel: BTRFS: device fsid 5a23a06a-00d4-4606-89bf-13e31a563129 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (506)
Sep 12 17:38:49.461885 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:38:49.471922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:38:49.480729 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:38:49.485528 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 17:38:49.492886 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 17:38:49.497132 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:38:49.500743 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 17:38:49.501635 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 17:38:49.512674 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:38:49.514198 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:38:49.519851 disk-uuid[550]: Primary Header is updated.
Sep 12 17:38:49.519851 disk-uuid[550]: Secondary Entries is updated.
Sep 12 17:38:49.519851 disk-uuid[550]: Secondary Header is updated.
Sep 12 17:38:49.523634 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:38:49.526559 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:38:49.530227 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:38:49.533528 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:38:50.528567 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:38:50.529001 disk-uuid[551]: The operation has completed successfully.
Sep 12 17:38:50.552265 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:38:50.552359 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:38:50.571682 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:38:50.575371 sh[573]: Success
Sep 12 17:38:50.585656 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 17:38:50.610448 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:38:50.623797 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:38:50.625555 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:38:50.635024 kernel: BTRFS info (device dm-0): first mount of filesystem 5a23a06a-00d4-4606-89bf-13e31a563129
Sep 12 17:38:50.635061 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:38:50.635072 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:38:50.636973 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:38:50.636988 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:38:50.640155 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:38:50.641234 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:38:50.641896 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:38:50.643430 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:38:50.654351 kernel: BTRFS info (device vda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:38:50.654391 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:38:50.654408 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:38:50.656682 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:38:50.663819 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:38:50.664881 kernel: BTRFS info (device vda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:38:50.670960 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:38:50.682705 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:38:50.730839 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:38:50.740717 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:38:50.749601 ignition[669]: Ignition 2.19.0
Sep 12 17:38:50.749609 ignition[669]: Stage: fetch-offline
Sep 12 17:38:50.749639 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:50.749647 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:38:50.749794 ignition[669]: parsed url from cmdline: ""
Sep 12 17:38:50.749797 ignition[669]: no config URL provided
Sep 12 17:38:50.749801 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:38:50.749808 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:38:50.749828 ignition[669]: op(1): [started] loading QEMU firmware config module
Sep 12 17:38:50.749833 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 17:38:50.754976 ignition[669]: op(1): [finished] loading QEMU firmware config module
Sep 12 17:38:50.759432 systemd-networkd[765]: lo: Link UP
Sep 12 17:38:50.759444 systemd-networkd[765]: lo: Gained carrier
Sep 12 17:38:50.760105 systemd-networkd[765]: Enumeration completed
Sep 12 17:38:50.760696 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:38:50.760699 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:38:50.760834 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:38:50.761421 systemd-networkd[765]: eth0: Link UP
Sep 12 17:38:50.761424 systemd-networkd[765]: eth0: Gained carrier
Sep 12 17:38:50.761432 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:38:50.764616 systemd[1]: Reached target network.target - Network.
Sep 12 17:38:50.789578 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:38:50.805259 ignition[669]: parsing config with SHA512: 868401db45bea3eb514dde07b2c1a609b6e048039e35828bfac278b47327ae65206fbed587433eacd3c315c614502f539b9a0d5e5ab015db9161179f0fc7f587
Sep 12 17:38:50.810075 unknown[669]: fetched base config from "system"
Sep 12 17:38:50.810085 unknown[669]: fetched user config from "qemu"
Sep 12 17:38:50.810728 ignition[669]: fetch-offline: fetch-offline passed
Sep 12 17:38:50.810819 ignition[669]: Ignition finished successfully
Sep 12 17:38:50.813168 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:38:50.815272 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 17:38:50.833665 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:38:50.843462 ignition[773]: Ignition 2.19.0
Sep 12 17:38:50.843470 ignition[773]: Stage: kargs
Sep 12 17:38:50.843641 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:50.843651 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:38:50.844471 ignition[773]: kargs: kargs passed
Sep 12 17:38:50.844509 ignition[773]: Ignition finished successfully
Sep 12 17:38:50.847631 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:38:50.858713 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:38:50.867755 ignition[781]: Ignition 2.19.0
Sep 12 17:38:50.867764 ignition[781]: Stage: disks
Sep 12 17:38:50.867914 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:50.867926 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:38:50.868758 ignition[781]: disks: disks passed
Sep 12 17:38:50.870169 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:38:50.868799 ignition[781]: Ignition finished successfully
Sep 12 17:38:50.871106 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:38:50.872175 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:38:50.873593 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:38:50.874731 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:38:50.876156 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:38:50.885680 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:38:50.894888 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:38:50.899034 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:38:50.912647 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:38:50.951555 kernel: EXT4-fs (vda9): mounted filesystem fc6c61a7-153d-4e7f-95c0-bffdb4824d71 r/w with ordered data mode. Quota mode: none.
Sep 12 17:38:50.951916 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:38:50.952951 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:38:50.968614 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:38:50.970056 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:38:50.971028 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:38:50.971110 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:38:50.971156 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:38:50.977742 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (800)
Sep 12 17:38:50.977356 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:38:50.981128 kernel: BTRFS info (device vda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:38:50.981146 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:38:50.981157 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:38:50.979940 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:38:50.984558 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:38:50.985637 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:38:51.015497 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:38:51.019416 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:38:51.022448 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:38:51.025950 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:38:51.088566 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:38:51.096652 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:38:51.098012 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:38:51.103552 kernel: BTRFS info (device vda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:38:51.116229 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:38:51.121137 ignition[915]: INFO : Ignition 2.19.0
Sep 12 17:38:51.121137 ignition[915]: INFO : Stage: mount
Sep 12 17:38:51.123516 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:51.123516 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:38:51.123516 ignition[915]: INFO : mount: mount passed
Sep 12 17:38:51.123516 ignition[915]: INFO : Ignition finished successfully
Sep 12 17:38:51.125277 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:38:51.140638 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:38:51.634488 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:38:51.646700 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:38:51.651555 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (927)
Sep 12 17:38:51.653185 kernel: BTRFS info (device vda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:38:51.653203 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:38:51.653214 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:38:51.655553 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:38:51.656526 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:38:51.672024 ignition[944]: INFO : Ignition 2.19.0
Sep 12 17:38:51.672024 ignition[944]: INFO : Stage: files
Sep 12 17:38:51.673215 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:51.673215 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:38:51.673215 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:38:51.675893 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:38:51.675893 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:38:51.678832 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:38:51.679827 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:38:51.679827 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:38:51.679276 unknown[944]: wrote ssh authorized keys file for user: core
Sep 12 17:38:51.682707 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 17:38:51.682707 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 12 17:38:51.735040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:38:52.115516 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 17:38:52.115516 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:38:52.118701 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:38:52.118701 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:38:52.118701 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:38:52.118701 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:38:52.118701 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:38:52.118701 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:38:52.118701 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:38:52.118701 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:38:52.118701 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:38:52.118701 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:38:52.118701 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:38:52.118701 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:38:52.118701 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 12 17:38:52.134709 systemd-networkd[765]: eth0: Gained IPv6LL
Sep 12 17:38:52.546161 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:38:52.814252 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:38:52.814252 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:38:52.817468 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:38:52.817468 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:38:52.817468 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:38:52.817468 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 17:38:52.817468 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:38:52.817468 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:38:52.817468 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 17:38:52.817468 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:38:52.834764 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:38:52.838266 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:38:52.840693 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:38:52.840693 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:38:52.840693 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:38:52.840693 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:38:52.840693 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:38:52.840693 ignition[944]: INFO : files: files passed
Sep 12 17:38:52.840693 ignition[944]: INFO : Ignition finished successfully
Sep 12 17:38:52.841061 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:38:52.852665 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:38:52.854197 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:38:52.855446 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:38:52.855551 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:38:52.861471 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 17:38:52.863655 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:38:52.863655 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:38:52.866106 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:38:52.866847 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:38:52.868390 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:38:52.880864 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:38:52.898199 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:38:52.898291 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:38:52.899946 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:38:52.901355 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:38:52.902841 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:38:52.903484 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:38:52.917377 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:38:52.919428 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:38:52.929515 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:38:52.930409 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:38:52.931981 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:38:52.933242 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:38:52.933346 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:38:52.935342 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:38:52.936991 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:38:52.938264 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:38:52.939498 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:38:52.941015 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:38:52.942425 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:38:52.943830 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:38:52.945304 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:38:52.946822 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:38:52.948134 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:38:52.949221 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:38:52.949323 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:38:52.951069 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:38:52.952450 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:38:52.954034 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:38:52.955409 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:38:52.956398 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:38:52.956502 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:38:52.958673 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:38:52.958781 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:38:52.960247 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:38:52.961387 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:38:52.967583 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:38:52.968511 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:38:52.970199 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:38:52.971424 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:38:52.971504 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:38:52.972664 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:38:52.972737 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:38:52.973862 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:38:52.973961 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:38:52.975366 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:38:52.975457 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:38:52.987675 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:38:52.988329 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:38:52.988444 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:38:52.992737 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:38:52.993381 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:38:52.993499 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:38:52.999037 ignition[998]: INFO : Ignition 2.19.0 Sep 12 17:38:52.999037 ignition[998]: INFO : Stage: umount Sep 12 17:38:52.999037 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:52.999037 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:38:52.996358 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:38:53.002680 ignition[998]: INFO : umount: umount passed Sep 12 17:38:53.002680 ignition[998]: INFO : Ignition finished successfully Sep 12 17:38:52.996458 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:38:53.000952 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:38:53.002606 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:38:53.003852 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:38:53.003927 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:38:53.006489 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 12 17:38:53.007023 systemd[1]: Stopped target network.target - Network. Sep 12 17:38:53.008362 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:38:53.008418 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:38:53.010916 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:38:53.010960 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:38:53.012196 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:38:53.012232 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:38:53.013574 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:38:53.013616 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:38:53.015039 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:38:53.016279 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:38:53.023278 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:38:53.023377 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:38:53.024581 systemd-networkd[765]: eth0: DHCPv6 lease lost Sep 12 17:38:53.025449 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:38:53.025497 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:38:53.027159 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:38:53.027259 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:38:53.028875 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:38:53.028933 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:38:53.035621 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:38:53.036273 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:38:53.036324 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:38:53.037842 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:38:53.037883 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:38:53.039266 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:38:53.039305 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:38:53.041006 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:38:53.049210 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:38:53.049312 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:38:53.053351 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:38:53.053487 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:38:53.056401 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:38:53.056438 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:38:53.057416 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:38:53.057447 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:38:53.058274 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:38:53.058315 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Sep 12 17:38:53.060564 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:38:53.060634 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:38:53.062698 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:38:53.062742 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:38:53.070720 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:38:53.071560 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:38:53.071611 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:38:53.073413 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:38:53.073454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:53.075208 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:38:53.076569 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:38:53.078146 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:38:53.078225 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:38:53.080050 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:38:53.081501 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:38:53.081610 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:38:53.083743 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:38:53.092130 systemd[1]: Switching root. Sep 12 17:38:53.122383 systemd-journald[237]: Journal stopped Sep 12 17:38:53.774115 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 12 17:38:53.774180 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:38:53.774192 kernel: SELinux: policy capability open_perms=1 Sep 12 17:38:53.774202 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:38:53.774212 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:38:53.774221 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:38:53.774231 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:38:53.774241 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:38:53.774250 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:38:53.774262 kernel: audit: type=1403 audit(1757698733.267:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:38:53.774273 systemd[1]: Successfully loaded SELinux policy in 30.469ms. Sep 12 17:38:53.774291 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.087ms. Sep 12 17:38:53.774305 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:38:53.774316 systemd[1]: Detected virtualization kvm. Sep 12 17:38:53.774326 systemd[1]: Detected architecture arm64. Sep 12 17:38:53.774336 systemd[1]: Detected first boot. Sep 12 17:38:53.774346 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:38:53.774361 zram_generator::config[1042]: No configuration found. Sep 12 17:38:53.774374 systemd[1]: Populated /etc with preset unit settings. 
Sep 12 17:38:53.774384 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:38:53.774396 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:38:53.774406 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:38:53.774417 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:38:53.774432 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:38:53.774443 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:38:53.774453 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:38:53.774465 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:38:53.774476 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:38:53.774487 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:38:53.774498 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:38:53.774508 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:38:53.774518 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:38:53.774529 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:38:53.774550 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:38:53.774561 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:38:53.774574 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:38:53.774584 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 17:38:53.774594 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:38:53.774605 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:38:53.774616 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:38:53.774626 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:38:53.774637 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:38:53.774648 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:38:53.774659 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:38:53.774670 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:38:53.774680 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:38:53.774690 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:38:53.774701 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:38:53.774712 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:38:53.774722 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:38:53.774733 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:38:53.774743 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:38:53.774755 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Sep 12 17:38:53.774766 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:38:53.774777 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:38:53.774787 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:38:53.774797 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:38:53.774807 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:38:53.774818 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:38:53.774828 systemd[1]: Reached target machines.target - Containers. Sep 12 17:38:53.774840 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:38:53.774851 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:53.774861 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:38:53.774871 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:38:53.774882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:53.774892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:38:53.774902 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:53.774913 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:38:53.774923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:38:53.774935 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:38:53.774946 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:38:53.774956 kernel: fuse: init (API version 7.39) Sep 12 17:38:53.774966 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:38:53.774976 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:38:53.774986 kernel: loop: module loaded Sep 12 17:38:53.774997 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:38:53.775006 kernel: ACPI: bus type drm_connector registered Sep 12 17:38:53.775016 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:38:53.775028 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:38:53.775039 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:38:53.775049 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:38:53.775060 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:38:53.775089 systemd-journald[1116]: Collecting audit messages is disabled. Sep 12 17:38:53.775112 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:38:53.775125 systemd[1]: Stopped verity-setup.service. Sep 12 17:38:53.775135 systemd-journald[1116]: Journal started Sep 12 17:38:53.775156 systemd-journald[1116]: Runtime Journal (/run/log/journal/be5435210332444dbf2465be876b219a) is 5.9M, max 47.3M, 41.4M free. Sep 12 17:38:53.601798 systemd[1]: Queued start job for default target multi-user.target. 
Sep 12 17:38:53.619506 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 17:38:53.619842 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:38:53.778565 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:38:53.778843 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:38:53.779811 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:38:53.780730 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:38:53.781565 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:38:53.782464 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:38:53.783500 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:38:53.784575 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:38:53.787696 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:38:53.788815 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:38:53.788941 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:38:53.790208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:53.790359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:53.791491 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:38:53.791672 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:38:53.792684 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:53.792815 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:53.793960 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:38:53.794092 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:38:53.795166 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:38:53.795287 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:38:53.796427 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:38:53.797609 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:38:53.798840 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:38:53.810354 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:38:53.819628 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:38:53.821367 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:38:53.822273 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:38:53.822307 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:38:53.824003 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:38:53.825857 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:38:53.827612 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:38:53.828452 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 12 17:38:53.829747 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:38:53.831341 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:38:53.832385 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:38:53.833684 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:38:53.834609 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:38:53.839620 systemd-journald[1116]: Time spent on flushing to /var/log/journal/be5435210332444dbf2465be876b219a is 29.443ms for 852 entries. Sep 12 17:38:53.839620 systemd-journald[1116]: System Journal (/var/log/journal/be5435210332444dbf2465be876b219a) is 8.0M, max 195.6M, 187.6M free. Sep 12 17:38:53.877166 systemd-journald[1116]: Received client request to flush runtime journal. Sep 12 17:38:53.880602 kernel: loop0: detected capacity change from 0 to 114432 Sep 12 17:38:53.880641 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:38:53.837724 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:38:53.840210 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:38:53.842706 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:38:53.846924 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:38:53.848066 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:38:53.849137 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:38:53.850324 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:38:53.867980 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:38:53.869172 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:38:53.871622 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:38:53.872865 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:38:53.878857 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:38:53.886738 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:38:53.891022 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:38:53.892480 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:38:53.900124 udevadm[1160]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 17:38:53.904754 kernel: loop1: detected capacity change from 0 to 211168 Sep 12 17:38:53.907434 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:38:53.908057 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:38:53.912700 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Sep 12 17:38:53.913022 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Sep 12 17:38:53.917892 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 17:38:53.943664 kernel: loop2: detected capacity change from 0 to 114328 Sep 12 17:38:53.967570 kernel: loop3: detected capacity change from 0 to 114432 Sep 12 17:38:53.976555 kernel: loop4: detected capacity change from 0 to 211168 Sep 12 17:38:53.987569 kernel: loop5: detected capacity change from 0 to 114328 Sep 12 17:38:53.993868 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 17:38:53.994317 (sd-merge)[1182]: Merged extensions into '/usr'. Sep 12 17:38:53.998026 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:38:53.998145 systemd[1]: Reloading... Sep 12 17:38:54.049641 zram_generator::config[1208]: No configuration found. Sep 12 17:38:54.118292 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:38:54.137724 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:38:54.173108 systemd[1]: Reloading finished in 174 ms. Sep 12 17:38:54.201188 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:38:54.203994 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:38:54.213690 systemd[1]: Starting ensure-sysext.service... Sep 12 17:38:54.215355 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:38:54.220612 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:38:54.220625 systemd[1]: Reloading... Sep 12 17:38:54.232192 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:38:54.232762 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:38:54.233504 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:38:54.233858 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Sep 12 17:38:54.233981 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Sep 12 17:38:54.236050 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:38:54.236619 systemd-tmpfiles[1243]: Skipping /boot Sep 12 17:38:54.243642 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:38:54.243724 systemd-tmpfiles[1243]: Skipping /boot Sep 12 17:38:54.261607 zram_generator::config[1270]: No configuration found. Sep 12 17:38:54.346590 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:38:54.381970 systemd[1]: Reloading finished in 161 ms. Sep 12 17:38:54.397598 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:38:54.406176 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:38:54.415141 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:38:54.417790 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Sep 12 17:38:54.419841 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:38:54.424796 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:38:54.429135 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:38:54.434159 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:38:54.437949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:54.440798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:54.442966 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:54.447597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:38:54.448524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:54.452106 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:38:54.454420 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:38:54.457091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:54.457220 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:54.458673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:54.458790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:54.460289 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:38:54.460399 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:38:54.464637 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:38:54.465948 systemd-udevd[1317]: Using default interface naming scheme 'v255'. Sep 12 17:38:54.471371 augenrules[1336]: No rules Sep 12 17:38:54.473603 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:38:54.476596 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:38:54.479749 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:54.486788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:54.490820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:54.496164 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:38:54.499685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:54.500957 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:38:54.501842 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:38:54.502656 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:38:54.504083 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:38:54.505532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 12 17:38:54.505753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:54.507043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:54.507193 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:54.514101 systemd[1]: Finished ensure-sysext.service. Sep 12 17:38:54.518196 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:38:54.526786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:54.528856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:54.534743 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:38:54.537668 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:54.538632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:54.541717 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:38:54.545662 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1353) Sep 12 17:38:54.546855 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:38:54.548137 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:38:54.548633 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:38:54.549330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:38:54.550519 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:54.550826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:54.553936 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:38:54.554101 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:38:54.554710 systemd-resolved[1311]: Positive Trust Anchors: Sep 12 17:38:54.554727 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:38:54.554759 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:38:54.555175 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:54.555293 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:54.562908 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 17:38:54.564706 systemd-resolved[1311]: Defaulting to hostname 'linux'. Sep 12 17:38:54.569722 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:38:54.576187 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 12 17:38:54.577986 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:38:54.578057 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:38:54.598977 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:38:54.605720 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:38:54.617635 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:38:54.618877 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:38:54.619977 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:38:54.632222 systemd-networkd[1382]: lo: Link UP Sep 12 17:38:54.632234 systemd-networkd[1382]: lo: Gained carrier Sep 12 17:38:54.632968 systemd-networkd[1382]: Enumeration completed Sep 12 17:38:54.633053 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:38:54.634106 systemd[1]: Reached target network.target - Network. Sep 12 17:38:54.636258 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:38:54.636269 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:38:54.636896 systemd-networkd[1382]: eth0: Link UP Sep 12 17:38:54.636902 systemd-networkd[1382]: eth0: Gained carrier Sep 12 17:38:54.636915 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:38:54.643042 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:38:54.650634 systemd-networkd[1382]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:38:54.651615 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Sep 12 17:38:54.654753 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 17:38:54.654801 systemd-timesyncd[1383]: Initial clock synchronization to Fri 2025-09-12 17:38:54.474350 UTC. Sep 12 17:38:54.671773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:38:54.678911 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:38:54.681758 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:38:54.696219 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:38:54.707584 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:54.728864 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:38:54.729980 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:38:54.730872 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:38:54.731738 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:38:54.732626 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Sep 12 17:38:54.733663 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:38:54.734514 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:38:54.735405 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:38:54.736396 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:38:54.736428 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:38:54.737136 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:38:54.739603 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:38:54.741715 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:38:54.749378 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:38:54.751313 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:38:54.752654 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:38:54.753514 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:38:54.754204 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:38:54.754934 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:38:54.754963 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:38:54.755832 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:38:54.757494 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:38:54.758739 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:38:54.761700 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:38:54.764781 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:38:54.765754 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:38:54.766825 jq[1412]: false Sep 12 17:38:54.767495 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:38:54.769699 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:38:54.772776 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:38:54.775516 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:38:54.778855 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 12 17:38:54.779994 extend-filesystems[1413]: Found loop3 Sep 12 17:38:54.779994 extend-filesystems[1413]: Found loop4 Sep 12 17:38:54.779994 extend-filesystems[1413]: Found loop5 Sep 12 17:38:54.779994 extend-filesystems[1413]: Found vda Sep 12 17:38:54.779994 extend-filesystems[1413]: Found vda1 Sep 12 17:38:54.779994 extend-filesystems[1413]: Found vda2 Sep 12 17:38:54.779994 extend-filesystems[1413]: Found vda3 Sep 12 17:38:54.779994 extend-filesystems[1413]: Found usr Sep 12 17:38:54.779994 extend-filesystems[1413]: Found vda4 Sep 12 17:38:54.779994 extend-filesystems[1413]: Found vda6 Sep 12 17:38:54.779994 extend-filesystems[1413]: Found vda7 Sep 12 17:38:54.779994 extend-filesystems[1413]: Found vda9 Sep 12 17:38:54.779994 extend-filesystems[1413]: Checking size of /dev/vda9 Sep 12 17:38:54.825562 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 17:38:54.825690 extend-filesystems[1413]: Resized partition /dev/vda9 Sep 12 17:38:54.788964 dbus-daemon[1411]: [system] SELinux support is enabled Sep 12 17:38:54.831419 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1362) Sep 12 17:38:54.782644 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:38:54.831517 extend-filesystems[1437]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:38:54.838860 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:38:54.783014 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:38:54.846333 jq[1430]: true Sep 12 17:38:54.784850 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:38:54.787113 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:38:54.790196 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:38:54.846730 jq[1436]: true Sep 12 17:38:54.792870 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:38:54.846848 tar[1435]: linux-arm64/LICENSE Sep 12 17:38:54.846848 tar[1435]: linux-arm64/helm Sep 12 17:38:54.797617 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:38:54.797783 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:38:54.798036 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:38:54.798185 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:38:54.801837 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:38:54.801982 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:38:54.811981 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:38:54.812011 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:38:54.814848 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:38:54.814864 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 12 17:38:54.819831 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:38:54.852928 extend-filesystems[1437]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:38:54.852928 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:38:54.852928 extend-filesystems[1437]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:38:54.866101 extend-filesystems[1413]: Resized filesystem in /dev/vda9 Sep 12 17:38:54.854238 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:38:54.880748 bash[1465]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:38:54.855611 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:38:54.858802 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:38:54.859119 systemd-logind[1420]: New seat seat0. Sep 12 17:38:54.862631 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:38:54.872994 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:38:54.884597 update_engine[1428]: I20250912 17:38:54.882820 1428 main.cc:92] Flatcar Update Engine starting Sep 12 17:38:54.885042 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 17:38:54.888449 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:38:54.891860 update_engine[1428]: I20250912 17:38:54.891819 1428 update_check_scheduler.cc:74] Next update check in 6m34s Sep 12 17:38:54.895759 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:38:54.931633 locksmithd[1472]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:38:54.972711 containerd[1439]: time="2025-09-12T17:38:54.972628920Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:38:54.997156 containerd[1439]: time="2025-09-12T17:38:54.997118960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:54.998491 containerd[1439]: time="2025-09-12T17:38:54.998456120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:38:54.998491 containerd[1439]: time="2025-09-12T17:38:54.998488960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:38:54.998545 containerd[1439]: time="2025-09-12T17:38:54.998504240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:38:54.998678 containerd[1439]: time="2025-09-12T17:38:54.998655360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:38:54.998703 containerd[1439]: time="2025-09-12T17:38:54.998679840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:54.998745 containerd[1439]: time="2025-09-12T17:38:54.998728640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:38:54.998769 containerd[1439]: time="2025-09-12T17:38:54.998743480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:54.998905 containerd[1439]: time="2025-09-12T17:38:54.998884360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:38:54.998937 containerd[1439]: time="2025-09-12T17:38:54.998904280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:54.998937 containerd[1439]: time="2025-09-12T17:38:54.998916600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:38:54.998937 containerd[1439]: time="2025-09-12T17:38:54.998925400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:54.999014 containerd[1439]: time="2025-09-12T17:38:54.998997200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:54.999216 containerd[1439]: time="2025-09-12T17:38:54.999195800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:54.999313 containerd[1439]: time="2025-09-12T17:38:54.999293200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:38:54.999313 containerd[1439]: time="2025-09-12T17:38:54.999310640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:38:54.999411 containerd[1439]: time="2025-09-12T17:38:54.999394440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:38:54.999452 containerd[1439]: time="2025-09-12T17:38:54.999438560Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:38:55.002856 containerd[1439]: time="2025-09-12T17:38:55.002831686Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:38:55.002882 containerd[1439]: time="2025-09-12T17:38:55.002875558Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:38:55.002901 containerd[1439]: time="2025-09-12T17:38:55.002890534Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:38:55.002929 containerd[1439]: time="2025-09-12T17:38:55.002904063Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:38:55.002929 containerd[1439]: time="2025-09-12T17:38:55.002916810Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:38:55.003069 containerd[1439]: time="2025-09-12T17:38:55.003048582Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Sep 12 17:38:55.003263 containerd[1439]: time="2025-09-12T17:38:55.003243463Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:38:55.003351 containerd[1439]: time="2025-09-12T17:38:55.003334608Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:38:55.003374 containerd[1439]: time="2025-09-12T17:38:55.003353533Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:38:55.003374 containerd[1439]: time="2025-09-12T17:38:55.003366241Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:38:55.003417 containerd[1439]: time="2025-09-12T17:38:55.003379027Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:38:55.003417 containerd[1439]: time="2025-09-12T17:38:55.003391266Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:38:55.003417 containerd[1439]: time="2025-09-12T17:38:55.003403387Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:38:55.003464 containerd[1439]: time="2025-09-12T17:38:55.003416525Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:38:55.003464 containerd[1439]: time="2025-09-12T17:38:55.003430289Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:38:55.003464 containerd[1439]: time="2025-09-12T17:38:55.003441941Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:38:55.003464 containerd[1439]: time="2025-09-12T17:38:55.003457151Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:38:55.003541 containerd[1439]: time="2025-09-12T17:38:55.003468569Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:38:55.003541 containerd[1439]: time="2025-09-12T17:38:55.003486908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003541 containerd[1439]: time="2025-09-12T17:38:55.003503799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003541 containerd[1439]: time="2025-09-12T17:38:55.003523819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003616 containerd[1439]: time="2025-09-12T17:38:55.003546381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003616 containerd[1439]: time="2025-09-12T17:38:55.003558385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003616 containerd[1439]: time="2025-09-12T17:38:55.003574025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003616 containerd[1439]: time="2025-09-12T17:38:55.003586694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Sep 12 17:38:55.003616 containerd[1439]: time="2025-09-12T17:38:55.003598816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003616 containerd[1439]: time="2025-09-12T17:38:55.003611250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003715 containerd[1439]: time="2025-09-12T17:38:55.003625170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003715 containerd[1439]: time="2025-09-12T17:38:55.003637057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003715 containerd[1439]: time="2025-09-12T17:38:55.003649530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003715 containerd[1439]: time="2025-09-12T17:38:55.003661573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003715 containerd[1439]: time="2025-09-12T17:38:55.003678074Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:38:55.003715 containerd[1439]: time="2025-09-12T17:38:55.003696217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003715 containerd[1439]: time="2025-09-12T17:38:55.003707322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003829 containerd[1439]: time="2025-09-12T17:38:55.003717175Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:38:55.003829 containerd[1439]: time="2025-09-12T17:38:55.003823101Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:38:55.003862 containerd[1439]: time="2025-09-12T17:38:55.003838507Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:38:55.003862 containerd[1439]: time="2025-09-12T17:38:55.003848087Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:38:55.003862 containerd[1439]: time="2025-09-12T17:38:55.003858175Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:38:55.003956 containerd[1439]: time="2025-09-12T17:38:55.003867168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:38:55.003956 containerd[1439]: time="2025-09-12T17:38:55.003878468Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:38:55.003956 containerd[1439]: time="2025-09-12T17:38:55.003887071Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:38:55.003956 containerd[1439]: time="2025-09-12T17:38:55.003899857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 17:38:55.006459 containerd[1439]: time="2025-09-12T17:38:55.004221309Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:38:55.006459 containerd[1439]: time="2025-09-12T17:38:55.004281760Z" level=info msg="Connect containerd service" Sep 12 17:38:55.006459 containerd[1439]: time="2025-09-12T17:38:55.005771677Z" level=info msg="using legacy CRI server" Sep 12 17:38:55.006459 containerd[1439]: time="2025-09-12T17:38:55.005865716Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:38:55.006459 containerd[1439]: time="2025-09-12T17:38:55.005963235Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:38:55.006655 containerd[1439]: time="2025-09-12T17:38:55.006569032Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:38:55.006769 
containerd[1439]: time="2025-09-12T17:38:55.006725085Z" level=info msg="Start subscribing containerd event" Sep 12 17:38:55.006801 containerd[1439]: time="2025-09-12T17:38:55.006779749Z" level=info msg="Start recovering state" Sep 12 17:38:55.006861 containerd[1439]: time="2025-09-12T17:38:55.006847394Z" level=info msg="Start event monitor" Sep 12 17:38:55.006886 containerd[1439]: time="2025-09-12T17:38:55.006862605Z" level=info msg="Start snapshots syncer" Sep 12 17:38:55.006886 containerd[1439]: time="2025-09-12T17:38:55.006870894Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:38:55.006919 containerd[1439]: time="2025-09-12T17:38:55.006886378Z" level=info msg="Start streaming server" Sep 12 17:38:55.006988 containerd[1439]: time="2025-09-12T17:38:55.006969156Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:38:55.007024 containerd[1439]: time="2025-09-12T17:38:55.007011033Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:38:55.008054 containerd[1439]: time="2025-09-12T17:38:55.007056234Z" level=info msg="containerd successfully booted in 0.035725s" Sep 12 17:38:55.007128 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:38:55.202742 tar[1435]: linux-arm64/README.md Sep 12 17:38:55.216373 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:38:55.366323 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:38:55.386584 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:38:55.398825 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:38:55.404037 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:38:55.405584 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:38:55.407925 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:38:55.418649 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:38:55.421102 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:38:55.422935 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 17:38:55.424039 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:38:56.230762 systemd-networkd[1382]: eth0: Gained IPv6LL Sep 12 17:38:56.233406 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:38:56.235052 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:38:56.247791 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:38:56.250141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:38:56.252024 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:38:56.267131 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:38:56.268086 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:38:56.269827 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:38:56.271580 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:38:56.783274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:38:56.784551 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 12 17:38:56.786661 systemd[1]: Startup finished in 510ms (kernel) + 4.591s (initrd) + 3.549s (userspace) = 8.651s. Sep 12 17:38:56.787804 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:38:57.125737 kubelet[1524]: E0912 17:38:57.125630 1524 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:38:57.128055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:38:57.128200 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:39:01.689128 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:39:01.690200 systemd[1]: Started sshd@0-10.0.0.153:22-10.0.0.1:44456.service - OpenSSH per-connection server daemon (10.0.0.1:44456). Sep 12 17:39:01.732926 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 44456 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:39:01.734518 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:01.742100 systemd-logind[1420]: New session 1 of user core. Sep 12 17:39:01.742992 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:39:01.754736 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:39:01.764606 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:39:01.766449 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:39:01.772110 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:39:01.840160 systemd[1542]: Queued start job for default target default.target. Sep 12 17:39:01.851385 systemd[1542]: Created slice app.slice - User Application Slice. Sep 12 17:39:01.851410 systemd[1542]: Reached target paths.target - Paths. Sep 12 17:39:01.851422 systemd[1542]: Reached target timers.target - Timers. Sep 12 17:39:01.852478 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:39:01.860659 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:39:01.860713 systemd[1542]: Reached target sockets.target - Sockets. Sep 12 17:39:01.860724 systemd[1542]: Reached target basic.target - Basic System. Sep 12 17:39:01.860756 systemd[1542]: Reached target default.target - Main User Target. Sep 12 17:39:01.860779 systemd[1542]: Startup finished in 84ms. Sep 12 17:39:01.860974 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:39:01.862434 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:39:01.923778 systemd[1]: Started sshd@1-10.0.0.153:22-10.0.0.1:44460.service - OpenSSH per-connection server daemon (10.0.0.1:44460). Sep 12 17:39:01.969596 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 44460 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:39:01.970691 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:01.975394 systemd-logind[1420]: New session 2 of user core. Sep 12 17:39:01.983698 systemd[1]: Started session-2.scope - Session 2 of User core. 
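
The kubelet crash above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-bootstrap state: on a kubeadm-managed node that file is written by kubeadm init or kubeadm join, and systemd simply restarts the unit until it appears, which is what the restart-counter lines further down show. A minimal hand-written KubeletConfiguration that would satisfy the loader, assuming the systemd cgroup driver implied by the SystemdCgroup:true runc option in the CRI config dump above:

    mkdir -p /var/lib/kubelet
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                     # match containerd's SystemdCgroup=true
    staticPodPath: /etc/kubernetes/manifests  # where control-plane static pods live
    EOF

In this boot the real file evidently arrives later, since the kubelet comes up cleanly after the reload near the end of the log.
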
Sep 12 17:39:02.033745 sshd[1553]: pam_unix(sshd:session): session closed for user core Sep 12 17:39:02.046676 systemd[1]: sshd@1-10.0.0.153:22-10.0.0.1:44460.service: Deactivated successfully. Sep 12 17:39:02.049841 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:39:02.051116 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:39:02.052776 systemd[1]: Started sshd@2-10.0.0.153:22-10.0.0.1:44462.service - OpenSSH per-connection server daemon (10.0.0.1:44462). Sep 12 17:39:02.053566 systemd-logind[1420]: Removed session 2. Sep 12 17:39:02.086992 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 44462 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:39:02.088114 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:02.091588 systemd-logind[1420]: New session 3 of user core. Sep 12 17:39:02.102669 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:39:02.150247 sshd[1560]: pam_unix(sshd:session): session closed for user core Sep 12 17:39:02.158767 systemd[1]: sshd@2-10.0.0.153:22-10.0.0.1:44462.service: Deactivated successfully. Sep 12 17:39:02.160108 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:39:02.161300 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:39:02.162326 systemd[1]: Started sshd@3-10.0.0.153:22-10.0.0.1:44476.service - OpenSSH per-connection server daemon (10.0.0.1:44476). Sep 12 17:39:02.165156 systemd-logind[1420]: Removed session 3. Sep 12 17:39:02.196794 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 44476 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:39:02.197863 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:02.201368 systemd-logind[1420]: New session 4 of user core. Sep 12 17:39:02.212659 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:39:02.262517 sshd[1567]: pam_unix(sshd:session): session closed for user core Sep 12 17:39:02.274600 systemd[1]: sshd@3-10.0.0.153:22-10.0.0.1:44476.service: Deactivated successfully. Sep 12 17:39:02.275853 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:39:02.277730 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:39:02.279180 systemd[1]: Started sshd@4-10.0.0.153:22-10.0.0.1:44490.service - OpenSSH per-connection server daemon (10.0.0.1:44490). Sep 12 17:39:02.280106 systemd-logind[1420]: Removed session 4. Sep 12 17:39:02.313155 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 44490 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:39:02.314277 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:02.317450 systemd-logind[1420]: New session 5 of user core. Sep 12 17:39:02.332697 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:39:02.388697 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:39:02.388967 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:39:02.403346 sudo[1577]: pam_unix(sudo:session): session closed for user root Sep 12 17:39:02.405048 sshd[1574]: pam_unix(sshd:session): session closed for user core Sep 12 17:39:02.413953 systemd[1]: sshd@4-10.0.0.153:22-10.0.0.1:44490.service: Deactivated successfully. 
Sep 12 17:39:02.415465 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:39:02.416781 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:39:02.418054 systemd[1]: Started sshd@5-10.0.0.153:22-10.0.0.1:44498.service - OpenSSH per-connection server daemon (10.0.0.1:44498). Sep 12 17:39:02.418765 systemd-logind[1420]: Removed session 5. Sep 12 17:39:02.454077 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 44498 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:39:02.455324 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:02.459279 systemd-logind[1420]: New session 6 of user core. Sep 12 17:39:02.474691 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:39:02.526186 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:39:02.526484 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:39:02.530101 sudo[1586]: pam_unix(sudo:session): session closed for user root Sep 12 17:39:02.534288 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:39:02.534562 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:39:02.551750 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:39:02.552801 auditctl[1589]: No rules Sep 12 17:39:02.553596 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:39:02.554617 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:39:02.556245 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:39:02.578000 augenrules[1607]: No rules Sep 12 17:39:02.579187 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:39:02.580123 sudo[1585]: pam_unix(sudo:session): session closed for user root Sep 12 17:39:02.581563 sshd[1582]: pam_unix(sshd:session): session closed for user core Sep 12 17:39:02.593417 systemd[1]: sshd@5-10.0.0.153:22-10.0.0.1:44498.service: Deactivated successfully. Sep 12 17:39:02.595023 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:39:02.597571 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:39:02.598630 systemd[1]: Started sshd@6-10.0.0.153:22-10.0.0.1:44510.service - OpenSSH per-connection server daemon (10.0.0.1:44510). Sep 12 17:39:02.599941 systemd-logind[1420]: Removed session 6. Sep 12 17:39:02.633606 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 44510 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:39:02.634783 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:02.639599 systemd-logind[1420]: New session 7 of user core. Sep 12 17:39:02.649695 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:39:02.698973 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:39:02.699242 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:39:02.957786 systemd[1]: Starting docker.service - Docker Application Container Engine... 
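
After the two rule files are removed above, augenrules correctly reports "No rules" when audit-rules is restarted. A hedged sketch of restoring a rule set through the same rules.d drop-in mechanism (the watched paths are illustrative, not taken from this log):

    cat <<'EOF' > /etc/audit/rules.d/10-example.rules
    # watch kubelet and static pod configuration for writes/attribute changes
    -w /var/lib/kubelet/config.yaml -p wa -k kubelet-config
    -w /etc/kubernetes/manifests/ -p wa -k static-pods
    EOF
    augenrules --load   # regenerate /etc/audit/audit.rules from rules.d and load it
    auditctl -l         # confirm the rules are active in the kernel
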
Sep 12 17:39:02.957930 (dockerd)[1637]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:39:03.168968 dockerd[1637]: time="2025-09-12T17:39:03.168907007Z" level=info msg="Starting up" Sep 12 17:39:03.312322 dockerd[1637]: time="2025-09-12T17:39:03.312203557Z" level=info msg="Loading containers: start." Sep 12 17:39:03.391560 kernel: Initializing XFRM netlink socket Sep 12 17:39:03.449441 systemd-networkd[1382]: docker0: Link UP Sep 12 17:39:03.469727 dockerd[1637]: time="2025-09-12T17:39:03.469694280Z" level=info msg="Loading containers: done." Sep 12 17:39:03.480298 dockerd[1637]: time="2025-09-12T17:39:03.480260359Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:39:03.480449 dockerd[1637]: time="2025-09-12T17:39:03.480376654Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:39:03.480515 dockerd[1637]: time="2025-09-12T17:39:03.480499498Z" level=info msg="Daemon has completed initialization" Sep 12 17:39:03.506867 dockerd[1637]: time="2025-09-12T17:39:03.506651919Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:39:03.506820 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:39:04.020163 containerd[1439]: time="2025-09-12T17:39:04.020018550Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 17:39:04.773454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2205888278.mount: Deactivated successfully. 
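
The overlay2 warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, Docker avoids the native overlay diff, which per the message only degrades image-build performance, not the running daemon. Two checks, the second assuming the kernel exposes its build config (not all do):

    docker info --format '{{.Driver}}'                      # expect: overlay2
    zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR /proc/config.gz    # inspect the kernel option
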
Sep 12 17:39:05.965462 containerd[1439]: time="2025-09-12T17:39:05.965407109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:05.966601 containerd[1439]: time="2025-09-12T17:39:05.966567291Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Sep 12 17:39:05.967560 containerd[1439]: time="2025-09-12T17:39:05.967197584Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:05.971108 containerd[1439]: time="2025-09-12T17:39:05.971069245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:05.971841 containerd[1439]: time="2025-09-12T17:39:05.971806581Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.951743205s" Sep 12 17:39:05.971900 containerd[1439]: time="2025-09-12T17:39:05.971844436Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 12 17:39:05.975550 containerd[1439]: time="2025-09-12T17:39:05.975382401Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 17:39:07.370486 containerd[1439]: time="2025-09-12T17:39:07.370418425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:07.371780 containerd[1439]: time="2025-09-12T17:39:07.371743322Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Sep 12 17:39:07.372627 containerd[1439]: time="2025-09-12T17:39:07.372601581Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:07.375889 containerd[1439]: time="2025-09-12T17:39:07.375842090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:07.376891 containerd[1439]: time="2025-09-12T17:39:07.376864841Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.401441827s" Sep 12 17:39:07.376942 containerd[1439]: time="2025-09-12T17:39:07.376896298Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 12 17:39:07.377318 
containerd[1439]: time="2025-09-12T17:39:07.377296001Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 17:39:07.378460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:39:07.392803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:07.488933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:07.492619 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:39:07.574868 kubelet[1855]: E0912 17:39:07.574824 1855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:39:07.578011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:39:07.578162 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:39:08.747608 containerd[1439]: time="2025-09-12T17:39:08.746931137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:08.747608 containerd[1439]: time="2025-09-12T17:39:08.747314174Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Sep 12 17:39:08.748258 containerd[1439]: time="2025-09-12T17:39:08.748208220Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:08.752050 containerd[1439]: time="2025-09-12T17:39:08.751994090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:08.753084 containerd[1439]: time="2025-09-12T17:39:08.753058658Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.37573008s" Sep 12 17:39:08.753242 containerd[1439]: time="2025-09-12T17:39:08.753151569Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 12 17:39:08.754167 containerd[1439]: time="2025-09-12T17:39:08.754142669Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 17:39:09.709962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021965703.mount: Deactivated successfully. 
Sep 12 17:39:09.933244 containerd[1439]: time="2025-09-12T17:39:09.933196362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:09.934066 containerd[1439]: time="2025-09-12T17:39:09.933884091Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Sep 12 17:39:09.934687 containerd[1439]: time="2025-09-12T17:39:09.934651304Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:09.937523 containerd[1439]: time="2025-09-12T17:39:09.936759773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:09.937523 containerd[1439]: time="2025-09-12T17:39:09.937510125Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.183333585s" Sep 12 17:39:09.937642 containerd[1439]: time="2025-09-12T17:39:09.937554491Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 12 17:39:09.938085 containerd[1439]: time="2025-09-12T17:39:09.938063361Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 17:39:10.468400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount547837849.mount: Deactivated successfully. 
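
The same pulls can be reproduced or verified by hand against the containerd socket announced earlier, assuming the crictl CLI is installed; image names and tags below are transcribed from the pull records above:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-proxy:v1.33.5
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        images | grep registry.k8s.io

If kubeadm is driving this bootstrap, "kubeadm config images pull --kubernetes-version v1.33.5" fetches the whole set in one command.
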
Sep 12 17:39:11.470564 containerd[1439]: time="2025-09-12T17:39:11.470400123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:11.471719 containerd[1439]: time="2025-09-12T17:39:11.471684387Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 12 17:39:11.473127 containerd[1439]: time="2025-09-12T17:39:11.473087495Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:11.475790 containerd[1439]: time="2025-09-12T17:39:11.475738245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:11.477289 containerd[1439]: time="2025-09-12T17:39:11.477060169Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.538965588s" Sep 12 17:39:11.477289 containerd[1439]: time="2025-09-12T17:39:11.477095954Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 12 17:39:11.477599 containerd[1439]: time="2025-09-12T17:39:11.477573404Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:39:11.903512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549897377.mount: Deactivated successfully. 
Sep 12 17:39:11.908020 containerd[1439]: time="2025-09-12T17:39:11.907977259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:11.909059 containerd[1439]: time="2025-09-12T17:39:11.909030020Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 12 17:39:11.910154 containerd[1439]: time="2025-09-12T17:39:11.910121277Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:11.912586 containerd[1439]: time="2025-09-12T17:39:11.912015878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:11.913522 containerd[1439]: time="2025-09-12T17:39:11.913486328Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 435.883282ms" Sep 12 17:39:11.913578 containerd[1439]: time="2025-09-12T17:39:11.913524705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:39:11.914071 containerd[1439]: time="2025-09-12T17:39:11.914035507Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 17:39:12.373341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2735156553.mount: Deactivated successfully. Sep 12 17:39:14.469089 containerd[1439]: time="2025-09-12T17:39:14.469038090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:14.471206 containerd[1439]: time="2025-09-12T17:39:14.470890433Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Sep 12 17:39:14.614386 containerd[1439]: time="2025-09-12T17:39:14.614315792Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:14.621247 containerd[1439]: time="2025-09-12T17:39:14.621178535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:14.622510 containerd[1439]: time="2025-09-12T17:39:14.622387343Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.708314248s" Sep 12 17:39:14.622510 containerd[1439]: time="2025-09-12T17:39:14.622421323Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 12 17:39:17.828440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
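
Note the version skew: the CRI config dumped earlier pins SandboxImage to registry.k8s.io/pause:3.8, while pause:3.10 has just been pulled alongside the v1.33 control-plane images (and the sandbox activity near the end of the log indeed runs on 3.8). A hedged sketch of aligning containerd, assuming a containerd 1.7 config.toml in which the sandbox_image key is already present:

    sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.k8s.io/pause:3.10"#' \
        /etc/containerd/config.toml
    systemctl restart containerd
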
Sep 12 17:39:17.836776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:17.931492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:17.934936 (kubelet)[2019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:39:17.966629 kubelet[2019]: E0912 17:39:17.966564 2019 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:39:17.969402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:39:17.969654 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:39:22.273827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:22.290102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:22.321758 systemd[1]: Reloading requested from client PID 2035 ('systemctl') (unit session-7.scope)... Sep 12 17:39:22.321773 systemd[1]: Reloading... Sep 12 17:39:22.393565 zram_generator::config[2077]: No configuration found. Sep 12 17:39:22.551595 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:39:22.604999 systemd[1]: Reloading finished in 282 ms. Sep 12 17:39:22.644730 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:22.647697 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:39:22.647883 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:22.649274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:22.752687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:22.756516 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:39:22.789607 kubelet[2121]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:39:22.789607 kubelet[2121]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:39:22.789607 kubelet[2121]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
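
Two of the three deprecation notices above say the flag "should be set via the config file specified by the Kubelet's --config flag". Assuming the kubelet.config.k8s.io/v1beta1 schema, the equivalent fields would be as below; the endpoint value matches the socket containerd is serving on, and the volume plugin dir matches the flexvolume path the kubelet recreates further down:

    cat <<'EOF' >> /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF

The --pod-infra-container-image notice, unlike the other two, points at CRI rather than the config file: per its own text the image garbage collector will take the sandbox image from CRI once the flag is removed in 1.35.
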
Sep 12 17:39:22.789901 kubelet[2121]: I0912 17:39:22.789670 2121 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:39:23.507334 kubelet[2121]: I0912 17:39:23.507288 2121 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:39:23.507334 kubelet[2121]: I0912 17:39:23.507320 2121 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:39:23.507579 kubelet[2121]: I0912 17:39:23.507563 2121 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:39:23.525569 kubelet[2121]: E0912 17:39:23.524598 2121 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:39:23.526025 kubelet[2121]: I0912 17:39:23.525988 2121 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:39:23.532814 kubelet[2121]: E0912 17:39:23.532779 2121 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:39:23.532814 kubelet[2121]: I0912 17:39:23.532815 2121 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:39:23.535254 kubelet[2121]: I0912 17:39:23.535236 2121 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:39:23.536257 kubelet[2121]: I0912 17:39:23.536214 2121 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:39:23.536394 kubelet[2121]: I0912 17:39:23.536252 2121 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:39:23.536475 kubelet[2121]: I0912 17:39:23.536457 2121 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:39:23.536475 kubelet[2121]: I0912 17:39:23.536466 2121 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:39:23.536672 kubelet[2121]: I0912 17:39:23.536659 2121 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:23.541033 kubelet[2121]: I0912 17:39:23.540923 2121 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:39:23.541033 kubelet[2121]: I0912 17:39:23.540958 2121 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:39:23.541033 kubelet[2121]: I0912 17:39:23.540994 2121 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:39:23.542140 kubelet[2121]: I0912 17:39:23.542047 2121 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:39:23.543401 kubelet[2121]: I0912 17:39:23.543058 2121 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:39:23.543738 kubelet[2121]: E0912 17:39:23.543702 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:39:23.543885 kubelet[2121]: E0912 17:39:23.543713 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:39:23.543970 kubelet[2121]: I0912 17:39:23.543728 2121 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:39:23.544128 kubelet[2121]: W0912 17:39:23.544116 2121 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:39:23.546296 kubelet[2121]: I0912 17:39:23.546281 2121 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:39:23.547781 kubelet[2121]: I0912 17:39:23.546398 2121 server.go:1289] "Started kubelet" Sep 12 17:39:23.547781 kubelet[2121]: I0912 17:39:23.546451 2121 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:39:23.547781 kubelet[2121]: I0912 17:39:23.547325 2121 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:39:23.547781 kubelet[2121]: I0912 17:39:23.547513 2121 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:39:23.550614 kubelet[2121]: I0912 17:39:23.549813 2121 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:39:23.550614 kubelet[2121]: I0912 17:39:23.550075 2121 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:39:23.550614 kubelet[2121]: E0912 17:39:23.550181 2121 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:39:23.551731 kubelet[2121]: I0912 17:39:23.551712 2121 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:39:23.551870 kubelet[2121]: I0912 17:39:23.551859 2121 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:39:23.552276 kubelet[2121]: E0912 17:39:23.552250 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:39:23.552423 kubelet[2121]: E0912 17:39:23.552402 2121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="200ms" Sep 12 17:39:23.552775 kubelet[2121]: I0912 17:39:23.552726 2121 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:39:23.552877 kubelet[2121]: I0912 17:39:23.552728 2121 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:39:23.553052 kubelet[2121]: I0912 17:39:23.553028 2121 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:39:23.554711 kubelet[2121]: E0912 17:39:23.553262 2121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.153:6443/api/v1/namespaces/default/events\": 
dial tcp 10.0.0.153:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186499b677530f65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:39:23.546365797 +0000 UTC m=+0.785833950,LastTimestamp:2025-09-12 17:39:23.546365797 +0000 UTC m=+0.785833950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:39:23.555584 kubelet[2121]: I0912 17:39:23.555560 2121 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:39:23.555584 kubelet[2121]: I0912 17:39:23.555578 2121 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:39:23.557237 kubelet[2121]: E0912 17:39:23.557212 2121 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:39:23.565385 kubelet[2121]: I0912 17:39:23.565370 2121 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:39:23.565473 kubelet[2121]: I0912 17:39:23.565462 2121 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:39:23.565572 kubelet[2121]: I0912 17:39:23.565534 2121 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:23.568649 kubelet[2121]: I0912 17:39:23.568607 2121 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:39:23.569609 kubelet[2121]: I0912 17:39:23.569580 2121 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:39:23.569609 kubelet[2121]: I0912 17:39:23.569610 2121 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:39:23.569697 kubelet[2121]: I0912 17:39:23.569633 2121 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:39:23.569697 kubelet[2121]: I0912 17:39:23.569645 2121 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:39:23.569697 kubelet[2121]: E0912 17:39:23.569686 2121 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:39:23.637947 kubelet[2121]: I0912 17:39:23.637692 2121 policy_none.go:49] "None policy: Start" Sep 12 17:39:23.637947 kubelet[2121]: I0912 17:39:23.637738 2121 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:39:23.637947 kubelet[2121]: I0912 17:39:23.637751 2121 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:39:23.638211 kubelet[2121]: E0912 17:39:23.638176 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:39:23.644053 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
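
The container manager dump above lists the kubelet's hard-eviction thresholds. Written out in config-file form, with values transcribed directly from the logged HardEvictionThresholds (these are the kubelet defaults):

    cat <<'EOF' >> /var/lib/kubelet/config.yaml
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
    EOF
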
Sep 12 17:39:23.655928 kubelet[2121]: E0912 17:39:23.650548 2121 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:39:23.659211 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:39:23.670058 kubelet[2121]: E0912 17:39:23.670011 2121 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:39:23.671677 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:39:23.672828 kubelet[2121]: E0912 17:39:23.672800 2121 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:39:23.673294 kubelet[2121]: I0912 17:39:23.673096 2121 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:39:23.673294 kubelet[2121]: I0912 17:39:23.673122 2121 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:39:23.673379 kubelet[2121]: I0912 17:39:23.673360 2121 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:39:23.674214 kubelet[2121]: E0912 17:39:23.674161 2121 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:39:23.674214 kubelet[2121]: E0912 17:39:23.674199 2121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:39:23.752846 kubelet[2121]: E0912 17:39:23.752813 2121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="400ms" Sep 12 17:39:23.775888 kubelet[2121]: I0912 17:39:23.775802 2121 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:39:23.776161 kubelet[2121]: E0912 17:39:23.776125 2121 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Sep 12 17:39:23.881397 systemd[1]: Created slice kubepods-burstable-pod36342a81fd5bf77ac9ab17bdd78b7213.slice - libcontainer container kubepods-burstable-pod36342a81fd5bf77ac9ab17bdd78b7213.slice. Sep 12 17:39:23.898302 kubelet[2121]: E0912 17:39:23.898262 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:39:23.901789 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 12 17:39:23.912851 kubelet[2121]: E0912 17:39:23.912735 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:39:23.914998 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
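
The kubepods-burstable-pod<UID> slices above belong to control-plane static pods: the kubelet reads their manifests from the staticPodPath logged earlier ("Adding static pod path" path="/etc/kubernetes/manifests"), and the surrounding connection-refused and "node not found" errors are expected until the kube-apiserver pod it is about to launch starts listening on 10.0.0.153:6443. To inspect, assuming crictl is installed:

    ls /etc/kubernetes/manifests
    # expect manifests matching the RunPodSandbox calls below:
    #   kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
    #   (plus etcd.yaml on a typical kubeadm control plane)
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
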
Sep 12 17:39:23.916323 kubelet[2121]: E0912 17:39:23.916280 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:39:23.953771 kubelet[2121]: I0912 17:39:23.953727 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36342a81fd5bf77ac9ab17bdd78b7213-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"36342a81fd5bf77ac9ab17bdd78b7213\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:39:23.953771 kubelet[2121]: I0912 17:39:23.953766 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36342a81fd5bf77ac9ab17bdd78b7213-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"36342a81fd5bf77ac9ab17bdd78b7213\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:39:23.953883 kubelet[2121]: I0912 17:39:23.953785 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:23.953883 kubelet[2121]: I0912 17:39:23.953801 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:23.953883 kubelet[2121]: I0912 17:39:23.953815 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 17:39:23.953883 kubelet[2121]: I0912 17:39:23.953837 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36342a81fd5bf77ac9ab17bdd78b7213-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"36342a81fd5bf77ac9ab17bdd78b7213\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:39:23.953883 kubelet[2121]: I0912 17:39:23.953851 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:23.953988 kubelet[2121]: I0912 17:39:23.953875 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:23.953988 kubelet[2121]: I0912 17:39:23.953893 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:23.978232 kubelet[2121]: I0912 17:39:23.978202 2121 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:39:23.978533 kubelet[2121]: E0912 17:39:23.978511 2121 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost"
Sep 12 17:39:24.083002 kubelet[2121]: E0912 17:39:24.082825 2121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.153:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.153:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186499b677530f65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:39:23.546365797 +0000 UTC m=+0.785833950,LastTimestamp:2025-09-12 17:39:23.546365797 +0000 UTC m=+0.785833950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 17:39:24.153644 kubelet[2121]: E0912 17:39:24.153595 2121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="800ms"
Sep 12 17:39:24.199097 kubelet[2121]: E0912 17:39:24.199061 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:24.200169 containerd[1439]: time="2025-09-12T17:39:24.199850636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:36342a81fd5bf77ac9ab17bdd78b7213,Namespace:kube-system,Attempt:0,}"
Sep 12 17:39:24.213891 kubelet[2121]: E0912 17:39:24.213863 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:24.215921 containerd[1439]: time="2025-09-12T17:39:24.215867825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}"
Sep 12 17:39:24.217074 kubelet[2121]: E0912 17:39:24.217052 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:24.217569 containerd[1439]: time="2025-09-12T17:39:24.217523251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}"
Sep 12 17:39:24.380123 kubelet[2121]: I0912 17:39:24.380028 2121 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:39:24.380367 kubelet[2121]: E0912 17:39:24.380339 2121 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost"
Sep 12 17:39:24.559046 kubelet[2121]: E0912 17:39:24.559004 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 12 17:39:24.728729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3168751291.mount: Deactivated successfully.
Sep 12 17:39:24.736151 containerd[1439]: time="2025-09-12T17:39:24.735342384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:39:24.736985 containerd[1439]: time="2025-09-12T17:39:24.736960667Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:39:24.737571 containerd[1439]: time="2025-09-12T17:39:24.737534239Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:39:24.738055 containerd[1439]: time="2025-09-12T17:39:24.738030367Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:39:24.738771 containerd[1439]: time="2025-09-12T17:39:24.738745193Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:39:24.740221 containerd[1439]: time="2025-09-12T17:39:24.740188198Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:39:24.740572 containerd[1439]: time="2025-09-12T17:39:24.740516284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Sep 12 17:39:24.742733 containerd[1439]: time="2025-09-12T17:39:24.742705620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:39:24.744384 containerd[1439]: time="2025-09-12T17:39:24.744191366Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 528.243058ms"
Sep 12 17:39:24.747123 containerd[1439]: time="2025-09-12T17:39:24.747089970Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.477001ms"
Sep 12 17:39:24.748053 containerd[1439]: time="2025-09-12T17:39:24.748008860Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 548.072024ms"
Sep 12 17:39:24.840123 containerd[1439]: time="2025-09-12T17:39:24.839882730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:39:24.840123 containerd[1439]: time="2025-09-12T17:39:24.839925790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:39:24.840123 containerd[1439]: time="2025-09-12T17:39:24.839935865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:24.840123 containerd[1439]: time="2025-09-12T17:39:24.839999276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:24.840123 containerd[1439]: time="2025-09-12T17:39:24.839732081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:39:24.840123 containerd[1439]: time="2025-09-12T17:39:24.839833313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:39:24.840123 containerd[1439]: time="2025-09-12T17:39:24.839849706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:24.840123 containerd[1439]: time="2025-09-12T17:39:24.839981084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:24.840523 containerd[1439]: time="2025-09-12T17:39:24.840459021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:39:24.840523 containerd[1439]: time="2025-09-12T17:39:24.840509117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:39:24.840618 containerd[1439]: time="2025-09-12T17:39:24.840524510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:24.842590 containerd[1439]: time="2025-09-12T17:39:24.842495748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:24.854235 kubelet[2121]: E0912 17:39:24.854172 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 12 17:39:24.860692 systemd[1]: Started cri-containerd-bf6ba4611e2cecf2c41b413f08294ccbaf20407006500ca3a060ad6d777330be.scope - libcontainer container bf6ba4611e2cecf2c41b413f08294ccbaf20407006500ca3a060ad6d777330be.
Sep 12 17:39:24.864795 systemd[1]: Started cri-containerd-02eaef64cf8397a30f275116f356f16d9385bdddecb7a699204d96e886cc9182.scope - libcontainer container 02eaef64cf8397a30f275116f356f16d9385bdddecb7a699204d96e886cc9182.
Sep 12 17:39:24.866187 systemd[1]: Started cri-containerd-ff58dcfabfacb6874e034a22ee919395df06cdf7a6e4f5ece642e56ec31a9673.scope - libcontainer container ff58dcfabfacb6874e034a22ee919395df06cdf7a6e4f5ece642e56ec31a9673.
Sep 12 17:39:24.898385 containerd[1439]: time="2025-09-12T17:39:24.898345547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf6ba4611e2cecf2c41b413f08294ccbaf20407006500ca3a060ad6d777330be\""
Sep 12 17:39:24.901670 kubelet[2121]: E0912 17:39:24.900089 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:24.904492 containerd[1439]: time="2025-09-12T17:39:24.904462926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"02eaef64cf8397a30f275116f356f16d9385bdddecb7a699204d96e886cc9182\""
Sep 12 17:39:24.905121 kubelet[2121]: E0912 17:39:24.905102 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:24.905805 containerd[1439]: time="2025-09-12T17:39:24.905778590Z" level=info msg="CreateContainer within sandbox \"bf6ba4611e2cecf2c41b413f08294ccbaf20407006500ca3a060ad6d777330be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 17:39:24.908874 containerd[1439]: time="2025-09-12T17:39:24.908839039Z" level=info msg="CreateContainer within sandbox \"02eaef64cf8397a30f275116f356f16d9385bdddecb7a699204d96e886cc9182\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 17:39:24.909357 containerd[1439]: time="2025-09-12T17:39:24.909324092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:36342a81fd5bf77ac9ab17bdd78b7213,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff58dcfabfacb6874e034a22ee919395df06cdf7a6e4f5ece642e56ec31a9673\""
Sep 12 17:39:24.910003 kubelet[2121]: E0912 17:39:24.909980 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:24.912860 containerd[1439]: time="2025-09-12T17:39:24.912833131Z" level=info msg="CreateContainer within sandbox \"ff58dcfabfacb6874e034a22ee919395df06cdf7a6e4f5ece642e56ec31a9673\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 17:39:24.922509 containerd[1439]: time="2025-09-12T17:39:24.922469664Z" level=info msg="CreateContainer within sandbox \"bf6ba4611e2cecf2c41b413f08294ccbaf20407006500ca3a060ad6d777330be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dbde62ec420c1f97b37bddbfd5495c5aaf0d329e00b1952f0e7dd6f6077dec68\""
Sep 12 17:39:24.923089 containerd[1439]: time="2025-09-12T17:39:24.923059068Z" level=info msg="StartContainer for \"dbde62ec420c1f97b37bddbfd5495c5aaf0d329e00b1952f0e7dd6f6077dec68\""
Sep 12 17:39:24.928440 containerd[1439]: time="2025-09-12T17:39:24.928405248Z" level=info msg="CreateContainer within sandbox \"02eaef64cf8397a30f275116f356f16d9385bdddecb7a699204d96e886cc9182\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"26568ae402525b0159c72dddd8f190d2a1ac069d824f23d3b96d26db6f09c856\""
Sep 12 17:39:24.929512 containerd[1439]: time="2025-09-12T17:39:24.929292913Z" level=info msg="StartContainer for \"26568ae402525b0159c72dddd8f190d2a1ac069d824f23d3b96d26db6f09c856\""
Sep 12 17:39:24.935786 containerd[1439]: time="2025-09-12T17:39:24.935638225Z" level=info msg="CreateContainer within sandbox \"ff58dcfabfacb6874e034a22ee919395df06cdf7a6e4f5ece642e56ec31a9673\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9b131844458fc4b785a42b85f8b807fb37d713a1385a2695d45b212c59b6e3e0\""
Sep 12 17:39:24.936247 containerd[1439]: time="2025-09-12T17:39:24.936219473Z" level=info msg="StartContainer for \"9b131844458fc4b785a42b85f8b807fb37d713a1385a2695d45b212c59b6e3e0\""
Sep 12 17:39:24.943967 kubelet[2121]: E0912 17:39:24.943933 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 12 17:39:24.950136 systemd[1]: Started cri-containerd-dbde62ec420c1f97b37bddbfd5495c5aaf0d329e00b1952f0e7dd6f6077dec68.scope - libcontainer container dbde62ec420c1f97b37bddbfd5495c5aaf0d329e00b1952f0e7dd6f6077dec68.
Sep 12 17:39:24.954172 kubelet[2121]: E0912 17:39:24.954132 2121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="1.6s"
Sep 12 17:39:24.959674 systemd[1]: Started cri-containerd-26568ae402525b0159c72dddd8f190d2a1ac069d824f23d3b96d26db6f09c856.scope - libcontainer container 26568ae402525b0159c72dddd8f190d2a1ac069d824f23d3b96d26db6f09c856.
Sep 12 17:39:24.962950 systemd[1]: Started cri-containerd-9b131844458fc4b785a42b85f8b807fb37d713a1385a2695d45b212c59b6e3e0.scope - libcontainer container 9b131844458fc4b785a42b85f8b807fb37d713a1385a2695d45b212c59b6e3e0.
Sep 12 17:39:24.992827 containerd[1439]: time="2025-09-12T17:39:24.992659556Z" level=info msg="StartContainer for \"dbde62ec420c1f97b37bddbfd5495c5aaf0d329e00b1952f0e7dd6f6077dec68\" returns successfully"
Sep 12 17:39:25.002184 containerd[1439]: time="2025-09-12T17:39:25.002061167Z" level=info msg="StartContainer for \"26568ae402525b0159c72dddd8f190d2a1ac069d824f23d3b96d26db6f09c856\" returns successfully"
Sep 12 17:39:25.003295 containerd[1439]: time="2025-09-12T17:39:25.003251520Z" level=info msg="StartContainer for \"9b131844458fc4b785a42b85f8b807fb37d713a1385a2695d45b212c59b6e3e0\" returns successfully"
Sep 12 17:39:25.182327 kubelet[2121]: I0912 17:39:25.182298 2121 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:39:25.578036 kubelet[2121]: E0912 17:39:25.577810 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:39:25.578036 kubelet[2121]: E0912 17:39:25.577927 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:25.581806 kubelet[2121]: E0912 17:39:25.581775 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:39:25.581894 kubelet[2121]: E0912 17:39:25.581883 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:25.582298 kubelet[2121]: E0912 17:39:25.582282 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:39:25.582416 kubelet[2121]: E0912 17:39:25.582401 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:26.585358 kubelet[2121]: E0912 17:39:26.585325 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:39:26.585685 kubelet[2121]: E0912 17:39:26.585446 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:26.585685 kubelet[2121]: E0912 17:39:26.585459 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:39:26.585685 kubelet[2121]: E0912 17:39:26.585589 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:27.105627 kubelet[2121]: E0912 17:39:27.105583 2121 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 12 17:39:27.163060 kubelet[2121]: I0912 17:39:27.163013 2121 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 17:39:27.163060 kubelet[2121]: E0912 17:39:27.163055 2121 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 12 17:39:27.252069 kubelet[2121]: I0912 17:39:27.251676 2121 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:27.260089 kubelet[2121]: E0912 17:39:27.259989 2121 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:27.260089 kubelet[2121]: I0912 17:39:27.260069 2121 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 17:39:27.263026 kubelet[2121]: E0912 17:39:27.262813 2121 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 12 17:39:27.263026 kubelet[2121]: I0912 17:39:27.262837 2121 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 17:39:27.264581 kubelet[2121]: E0912 17:39:27.264555 2121 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 12 17:39:27.545080 kubelet[2121]: I0912 17:39:27.545045 2121 apiserver.go:52] "Watching apiserver"
Sep 12 17:39:27.552295 kubelet[2121]: I0912 17:39:27.552254 2121 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:39:27.584752 kubelet[2121]: I0912 17:39:27.584725 2121 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 17:39:27.586639 kubelet[2121]: E0912 17:39:27.586611 2121 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 12 17:39:27.586880 kubelet[2121]: E0912 17:39:27.586749 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:29.198680 systemd[1]: Reloading requested from client PID 2410 ('systemctl') (unit session-7.scope)...
Sep 12 17:39:29.198695 systemd[1]: Reloading...
Sep 12 17:39:29.260568 zram_generator::config[2449]: No configuration found.
Sep 12 17:39:29.343924 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:39:29.412590 systemd[1]: Reloading finished in 213 ms.
Sep 12 17:39:29.445982 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:39:29.459920 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:39:29.460147 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:39:29.460206 systemd[1]: kubelet.service: Consumed 1.140s CPU time, 132.6M memory peak, 0B memory swap peak.
Sep 12 17:39:29.469038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:39:29.570499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:39:29.574351 (kubelet)[2491]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:39:29.617575 kubelet[2491]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:39:29.617575 kubelet[2491]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:39:29.617575 kubelet[2491]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:39:29.617914 kubelet[2491]: I0912 17:39:29.617617 2491 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:39:29.622957 kubelet[2491]: I0912 17:39:29.622892 2491 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 17:39:29.622957 kubelet[2491]: I0912 17:39:29.622919 2491 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:39:29.624266 kubelet[2491]: I0912 17:39:29.623374 2491 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 17:39:29.625692 kubelet[2491]: I0912 17:39:29.625662 2491 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 12 17:39:29.627733 kubelet[2491]: I0912 17:39:29.627669 2491 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:39:29.631191 kubelet[2491]: E0912 17:39:29.631054 2491 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:39:29.631191 kubelet[2491]: I0912 17:39:29.631082 2491 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:39:29.633545 kubelet[2491]: I0912 17:39:29.633517 2491 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:39:29.633748 kubelet[2491]: I0912 17:39:29.633725 2491 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:39:29.633881 kubelet[2491]: I0912 17:39:29.633748 2491 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:39:29.633956 kubelet[2491]: I0912 17:39:29.633888 2491 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:39:29.633956 kubelet[2491]: I0912 17:39:29.633896 2491 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 17:39:29.633956 kubelet[2491]: I0912 17:39:29.633934 2491 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:39:29.634082 kubelet[2491]: I0912 17:39:29.634069 2491 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 17:39:29.634111 kubelet[2491]: I0912 17:39:29.634089 2491 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:39:29.634111 kubelet[2491]: I0912 17:39:29.634110 2491 kubelet.go:386] "Adding apiserver pod source"
Sep 12 17:39:29.634150 kubelet[2491]: I0912 17:39:29.634121 2491 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:39:29.637336 kubelet[2491]: I0912 17:39:29.635271 2491 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:39:29.637336 kubelet[2491]: I0912 17:39:29.635822 2491 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 17:39:29.640563 kubelet[2491]: I0912 17:39:29.640019 2491 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:39:29.640563 kubelet[2491]: I0912 17:39:29.640073 2491 server.go:1289] "Started kubelet"
Sep 12 17:39:29.641722 kubelet[2491]: I0912 17:39:29.641694 2491 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:39:29.644359 kubelet[2491]: I0912 17:39:29.644313 2491 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:39:29.645812 kubelet[2491]: I0912 17:39:29.645794 2491 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:39:29.648546 kubelet[2491]: E0912 17:39:29.646584 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:39:29.648546 kubelet[2491]: I0912 17:39:29.646688 2491 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:39:29.648546 kubelet[2491]: I0912 17:39:29.646139 2491 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:39:29.648546 kubelet[2491]: I0912 17:39:29.646911 2491 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:39:29.648546 kubelet[2491]: I0912 17:39:29.647037 2491 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:39:29.649798 kubelet[2491]: I0912 17:39:29.649778 2491 server.go:317] "Adding debug handlers to kubelet server"
Sep 12 17:39:29.651031 kubelet[2491]: I0912 17:39:29.650334 2491 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:39:29.653584 kubelet[2491]: I0912 17:39:29.653551 2491 factory.go:223] Registration of the systemd container factory successfully
Sep 12 17:39:29.653655 kubelet[2491]: I0912 17:39:29.653644 2491 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:39:29.654467 kubelet[2491]: I0912 17:39:29.654446 2491 factory.go:223] Registration of the containerd container factory successfully
Sep 12 17:39:29.658525 kubelet[2491]: I0912 17:39:29.658478 2491 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:39:29.665107 kubelet[2491]: E0912 17:39:29.665075 2491 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:39:29.673146 kubelet[2491]: I0912 17:39:29.673103 2491 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:39:29.673146 kubelet[2491]: I0912 17:39:29.673138 2491 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 17:39:29.673245 kubelet[2491]: I0912 17:39:29.673157 2491 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 17:39:29.673245 kubelet[2491]: I0912 17:39:29.673163 2491 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 12 17:39:29.673245 kubelet[2491]: E0912 17:39:29.673211 2491 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:39:29.698966 kubelet[2491]: I0912 17:39:29.698657 2491 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:39:29.698966 kubelet[2491]: I0912 17:39:29.698674 2491 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:39:29.698966 kubelet[2491]: I0912 17:39:29.698694 2491 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:39:29.698966 kubelet[2491]: I0912 17:39:29.698809 2491 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 17:39:29.698966 kubelet[2491]: I0912 17:39:29.698817 2491 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 17:39:29.698966 kubelet[2491]: I0912 17:39:29.698833 2491 policy_none.go:49] "None policy: Start"
Sep 12 17:39:29.698966 kubelet[2491]: I0912 17:39:29.698841 2491 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:39:29.698966 kubelet[2491]: I0912 17:39:29.698849 2491 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:39:29.698966 kubelet[2491]: I0912 17:39:29.698924 2491 state_mem.go:75] "Updated machine memory state"
Sep 12 17:39:29.702118 kubelet[2491]: E0912 17:39:29.702025 2491 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 12 17:39:29.702507 kubelet[2491]: I0912 17:39:29.702166 2491 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:39:29.702507 kubelet[2491]: I0912 17:39:29.702203 2491 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:39:29.702507 kubelet[2491]: I0912 17:39:29.702378 2491 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:39:29.703461 kubelet[2491]: E0912 17:39:29.703437 2491 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 17:39:29.775154 kubelet[2491]: I0912 17:39:29.774850 2491 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 17:39:29.775154 kubelet[2491]: I0912 17:39:29.774877 2491 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:29.775154 kubelet[2491]: I0912 17:39:29.774979 2491 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 17:39:29.805650 kubelet[2491]: I0912 17:39:29.805488 2491 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:39:29.813551 kubelet[2491]: I0912 17:39:29.813302 2491 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 12 17:39:29.813811 kubelet[2491]: I0912 17:39:29.813702 2491 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 17:39:29.947501 kubelet[2491]: I0912 17:39:29.947470 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 17:39:29.947501 kubelet[2491]: I0912 17:39:29.947505 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36342a81fd5bf77ac9ab17bdd78b7213-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"36342a81fd5bf77ac9ab17bdd78b7213\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:39:29.947659 kubelet[2491]: I0912 17:39:29.947525 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:29.947659 kubelet[2491]: I0912 17:39:29.947555 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:29.947659 kubelet[2491]: I0912 17:39:29.947569 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36342a81fd5bf77ac9ab17bdd78b7213-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"36342a81fd5bf77ac9ab17bdd78b7213\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:39:29.947659 kubelet[2491]: I0912 17:39:29.947586 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36342a81fd5bf77ac9ab17bdd78b7213-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"36342a81fd5bf77ac9ab17bdd78b7213\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:39:29.947659 kubelet[2491]: I0912 17:39:29.947601 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:29.947764 kubelet[2491]: I0912 17:39:29.947617 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:29.947764 kubelet[2491]: I0912 17:39:29.947632 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:30.082652 kubelet[2491]: E0912 17:39:30.082495 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:30.084423 kubelet[2491]: E0912 17:39:30.083517 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:30.084423 kubelet[2491]: E0912 17:39:30.083681 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:30.635662 kubelet[2491]: I0912 17:39:30.635631 2491 apiserver.go:52] "Watching apiserver"
Sep 12 17:39:30.646950 kubelet[2491]: I0912 17:39:30.646923 2491 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:39:30.685320 kubelet[2491]: E0912 17:39:30.685284 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:30.686090 kubelet[2491]: I0912 17:39:30.685933 2491 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 17:39:30.686161 kubelet[2491]: I0912 17:39:30.686128 2491 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:30.693177 kubelet[2491]: E0912 17:39:30.693145 2491 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 12 17:39:30.693465 kubelet[2491]: E0912 17:39:30.693441 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:30.693711 kubelet[2491]: E0912 17:39:30.693681 2491 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:39:30.694264 kubelet[2491]: E0912 17:39:30.693837 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:30.717861 kubelet[2491]: I0912 17:39:30.717812 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.717797201 podStartE2EDuration="1.717797201s" podCreationTimestamp="2025-09-12 17:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:30.709792598 +0000 UTC m=+1.131138767" watchObservedRunningTime="2025-09-12 17:39:30.717797201 +0000 UTC m=+1.139143370"
Sep 12 17:39:30.726453 kubelet[2491]: I0912 17:39:30.726370 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7263584779999999 podStartE2EDuration="1.726358478s" podCreationTimestamp="2025-09-12 17:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:30.718087478 +0000 UTC m=+1.139433647" watchObservedRunningTime="2025-09-12 17:39:30.726358478 +0000 UTC m=+1.147704647"
Sep 12 17:39:30.733356 kubelet[2491]: I0912 17:39:30.733314 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.733302452 podStartE2EDuration="1.733302452s" podCreationTimestamp="2025-09-12 17:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:30.726498917 +0000 UTC m=+1.147845086" watchObservedRunningTime="2025-09-12 17:39:30.733302452 +0000 UTC m=+1.154648621"
Sep 12 17:39:31.686648 kubelet[2491]: E0912 17:39:31.686614 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:31.686969 kubelet[2491]: E0912 17:39:31.686715 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:31.687011 kubelet[2491]: E0912 17:39:31.686976 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:35.311385 kubelet[2491]: E0912 17:39:35.311352 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:35.533064 kubelet[2491]: I0912 17:39:35.533039 2491 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 17:39:35.533525 containerd[1439]: time="2025-09-12T17:39:35.533491263Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 17:39:35.533989 kubelet[2491]: I0912 17:39:35.533968 2491 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 17:39:35.691594 kubelet[2491]: E0912 17:39:35.691389 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:36.582633 systemd[1]: Created slice kubepods-besteffort-podba81ad45_ad65_42e8_acfb_35d7c422d9bd.slice - libcontainer container kubepods-besteffort-podba81ad45_ad65_42e8_acfb_35d7c422d9bd.slice.
Sep 12 17:39:36.594260 kubelet[2491]: I0912 17:39:36.594230 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ba81ad45-ad65-42e8-acfb-35d7c422d9bd-kube-proxy\") pod \"kube-proxy-8hzsh\" (UID: \"ba81ad45-ad65-42e8-acfb-35d7c422d9bd\") " pod="kube-system/kube-proxy-8hzsh"
Sep 12 17:39:36.594260 kubelet[2491]: I0912 17:39:36.594261 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhtrw\" (UniqueName: \"kubernetes.io/projected/ba81ad45-ad65-42e8-acfb-35d7c422d9bd-kube-api-access-xhtrw\") pod \"kube-proxy-8hzsh\" (UID: \"ba81ad45-ad65-42e8-acfb-35d7c422d9bd\") " pod="kube-system/kube-proxy-8hzsh"
Sep 12 17:39:36.594622 kubelet[2491]: I0912 17:39:36.594282 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba81ad45-ad65-42e8-acfb-35d7c422d9bd-xtables-lock\") pod \"kube-proxy-8hzsh\" (UID: \"ba81ad45-ad65-42e8-acfb-35d7c422d9bd\") " pod="kube-system/kube-proxy-8hzsh"
Sep 12 17:39:36.594622 kubelet[2491]: I0912 17:39:36.594297 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba81ad45-ad65-42e8-acfb-35d7c422d9bd-lib-modules\") pod \"kube-proxy-8hzsh\" (UID: \"ba81ad45-ad65-42e8-acfb-35d7c422d9bd\") " pod="kube-system/kube-proxy-8hzsh"
Sep 12 17:39:36.692747 kubelet[2491]: E0912 17:39:36.692718 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:36.740271 systemd[1]: Created slice kubepods-besteffort-podccf8f9a6_8701_467e_9bc0_dffb3fc7a2a5.slice - libcontainer container kubepods-besteffort-podccf8f9a6_8701_467e_9bc0_dffb3fc7a2a5.slice.
Sep 12 17:39:36.795336 kubelet[2491]: I0912 17:39:36.795284 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkplh\" (UniqueName: \"kubernetes.io/projected/ccf8f9a6-8701-467e-9bc0-dffb3fc7a2a5-kube-api-access-lkplh\") pod \"tigera-operator-755d956888-t76ld\" (UID: \"ccf8f9a6-8701-467e-9bc0-dffb3fc7a2a5\") " pod="tigera-operator/tigera-operator-755d956888-t76ld"
Sep 12 17:39:36.795336 kubelet[2491]: I0912 17:39:36.795338 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ccf8f9a6-8701-467e-9bc0-dffb3fc7a2a5-var-lib-calico\") pod \"tigera-operator-755d956888-t76ld\" (UID: \"ccf8f9a6-8701-467e-9bc0-dffb3fc7a2a5\") " pod="tigera-operator/tigera-operator-755d956888-t76ld"
Sep 12 17:39:36.893628 kubelet[2491]: E0912 17:39:36.893390 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:36.894264 containerd[1439]: time="2025-09-12T17:39:36.894003291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8hzsh,Uid:ba81ad45-ad65-42e8-acfb-35d7c422d9bd,Namespace:kube-system,Attempt:0,}"
Sep 12 17:39:36.917830 containerd[1439]: time="2025-09-12T17:39:36.917729247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:39:36.917830 containerd[1439]: time="2025-09-12T17:39:36.917802966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:39:36.918174 containerd[1439]: time="2025-09-12T17:39:36.917850526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:36.918266 containerd[1439]: time="2025-09-12T17:39:36.918246483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:36.955743 systemd[1]: Started cri-containerd-8bc5f633d210bed1303008332915c2c0d3b16c20acb251be5f372c4d82da94e5.scope - libcontainer container 8bc5f633d210bed1303008332915c2c0d3b16c20acb251be5f372c4d82da94e5.
Sep 12 17:39:36.980739 containerd[1439]: time="2025-09-12T17:39:36.980702130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8hzsh,Uid:ba81ad45-ad65-42e8-acfb-35d7c422d9bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bc5f633d210bed1303008332915c2c0d3b16c20acb251be5f372c4d82da94e5\""
Sep 12 17:39:36.981699 kubelet[2491]: E0912 17:39:36.981678 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:36.992662 containerd[1439]: time="2025-09-12T17:39:36.992562808Z" level=info msg="CreateContainer within sandbox \"8bc5f633d210bed1303008332915c2c0d3b16c20acb251be5f372c4d82da94e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:39:37.006608 containerd[1439]: time="2025-09-12T17:39:37.006572993Z" level=info msg="CreateContainer within sandbox \"8bc5f633d210bed1303008332915c2c0d3b16c20acb251be5f372c4d82da94e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"efb9d7dc1dfcea59200a8f7e13eb01950722589569d40ba12fa38a689f0f2186\""
Sep 12 17:39:37.007680 containerd[1439]: time="2025-09-12T17:39:37.007066910Z" level=info msg="StartContainer for \"efb9d7dc1dfcea59200a8f7e13eb01950722589569d40ba12fa38a689f0f2186\""
Sep 12 17:39:37.030677 systemd[1]: Started cri-containerd-efb9d7dc1dfcea59200a8f7e13eb01950722589569d40ba12fa38a689f0f2186.scope - libcontainer container efb9d7dc1dfcea59200a8f7e13eb01950722589569d40ba12fa38a689f0f2186.
Sep 12 17:39:37.043750 containerd[1439]: time="2025-09-12T17:39:37.043719828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-t76ld,Uid:ccf8f9a6-8701-467e-9bc0-dffb3fc7a2a5,Namespace:tigera-operator,Attempt:0,}"
Sep 12 17:39:37.051651 containerd[1439]: time="2025-09-12T17:39:37.051616656Z" level=info msg="StartContainer for \"efb9d7dc1dfcea59200a8f7e13eb01950722589569d40ba12fa38a689f0f2186\" returns successfully"
Sep 12 17:39:37.062661 containerd[1439]: time="2025-09-12T17:39:37.061275113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:39:37.062661 containerd[1439]: time="2025-09-12T17:39:37.062407465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:39:37.062661 containerd[1439]: time="2025-09-12T17:39:37.062423585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:37.062972 containerd[1439]: time="2025-09-12T17:39:37.062938462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:37.079682 systemd[1]: Started cri-containerd-602093e26f7aea7fbac8f8bedc27f23c25ee4222ef31d112a3800a8449bf2959.scope - libcontainer container 602093e26f7aea7fbac8f8bedc27f23c25ee4222ef31d112a3800a8449bf2959.
Sep 12 17:39:37.111728 containerd[1439]: time="2025-09-12T17:39:37.111689701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-t76ld,Uid:ccf8f9a6-8701-467e-9bc0-dffb3fc7a2a5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"602093e26f7aea7fbac8f8bedc27f23c25ee4222ef31d112a3800a8449bf2959\""
Sep 12 17:39:37.114620 containerd[1439]: time="2025-09-12T17:39:37.114224324Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 17:39:37.696272 kubelet[2491]: E0912 17:39:37.696230 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:37.705355 kubelet[2491]: I0912 17:39:37.705275 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8hzsh" podStartSLOduration=1.705263755 podStartE2EDuration="1.705263755s" podCreationTimestamp="2025-09-12 17:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:37.704949317 +0000 UTC m=+8.126295486" watchObservedRunningTime="2025-09-12 17:39:37.705263755 +0000 UTC m=+8.126609924"
Sep 12 17:39:37.710263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3379275276.mount: Deactivated successfully.
Sep 12 17:39:38.288984 kubelet[2491]: E0912 17:39:38.288954 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:38.398565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2767275496.mount: Deactivated successfully.
Sep 12 17:39:38.699684 kubelet[2491]: E0912 17:39:38.699528 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:39.699962 kubelet[2491]: E0912 17:39:39.699934 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:40.136425 update_engine[1428]: I20250912 17:39:40.136379 1428 update_attempter.cc:509] Updating boot flags...
Sep 12 17:39:40.157023 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2807)
Sep 12 17:39:40.187601 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2811)
Sep 12 17:39:40.570848 kubelet[2491]: E0912 17:39:40.570534 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:42.996605 containerd[1439]: time="2025-09-12T17:39:42.996555634Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:42.997531 containerd[1439]: time="2025-09-12T17:39:42.997500749Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 12 17:39:42.998365 containerd[1439]: time="2025-09-12T17:39:42.998322945Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:43.000452 containerd[1439]: time="2025-09-12T17:39:43.000410294Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:43.001644 containerd[1439]: time="2025-09-12T17:39:43.001617088Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 5.887358524s"
Sep 12 17:39:43.001710 containerd[1439]: time="2025-09-12T17:39:43.001649888Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 12 17:39:43.007633 containerd[1439]: time="2025-09-12T17:39:43.007603099Z" level=info msg="CreateContainer within sandbox \"602093e26f7aea7fbac8f8bedc27f23c25ee4222ef31d112a3800a8449bf2959\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 17:39:43.016043 containerd[1439]: time="2025-09-12T17:39:43.015987578Z" level=info msg="CreateContainer within sandbox \"602093e26f7aea7fbac8f8bedc27f23c25ee4222ef31d112a3800a8449bf2959\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2d1becfeb7996a0853373d59ee7b5ade7d5d7505a8ddb4eead87087c995b8064\""
Sep 12 17:39:43.016688 containerd[1439]: time="2025-09-12T17:39:43.016652695Z" level=info msg="StartContainer for \"2d1becfeb7996a0853373d59ee7b5ade7d5d7505a8ddb4eead87087c995b8064\""
Sep 12 17:39:43.046738 systemd[1]: Started cri-containerd-2d1becfeb7996a0853373d59ee7b5ade7d5d7505a8ddb4eead87087c995b8064.scope - libcontainer container 2d1becfeb7996a0853373d59ee7b5ade7d5d7505a8ddb4eead87087c995b8064.
Sep 12 17:39:43.094568 containerd[1439]: time="2025-09-12T17:39:43.094489715Z" level=info msg="StartContainer for \"2d1becfeb7996a0853373d59ee7b5ade7d5d7505a8ddb4eead87087c995b8064\" returns successfully"
Sep 12 17:39:43.719520 kubelet[2491]: I0912 17:39:43.719411 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-t76ld" podStartSLOduration=1.8283340049999999 podStartE2EDuration="7.719396869s" podCreationTimestamp="2025-09-12 17:39:36 +0000 UTC" firstStartedPulling="2025-09-12 17:39:37.113057092 +0000 UTC m=+7.534403221" lastFinishedPulling="2025-09-12 17:39:43.004119916 +0000 UTC m=+13.425466085" observedRunningTime="2025-09-12 17:39:43.718629073 +0000 UTC m=+14.139975242" watchObservedRunningTime="2025-09-12 17:39:43.719396869 +0000 UTC m=+14.140743038"
Sep 12 17:39:48.120674 sudo[1619]: pam_unix(sudo:session): session closed for user root
Sep 12 17:39:48.143579 sshd[1616]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:48.147361 systemd[1]: sshd@6-10.0.0.153:22-10.0.0.1:44510.service: Deactivated successfully.
Sep 12 17:39:48.149433 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 17:39:48.149729 systemd[1]: session-7.scope: Consumed 9.475s CPU time, 153.3M memory peak, 0B memory swap peak.
Sep 12 17:39:48.151463 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit.
Sep 12 17:39:48.153821 systemd-logind[1420]: Removed session 7.
Sep 12 17:39:52.729382 systemd[1]: Created slice kubepods-besteffort-podeda8a71d_1671_471e_a07a_735c88f00784.slice - libcontainer container kubepods-besteffort-podeda8a71d_1671_471e_a07a_735c88f00784.slice.
Sep 12 17:39:52.798106 kubelet[2491]: I0912 17:39:52.797974 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eda8a71d-1671-471e-a07a-735c88f00784-typha-certs\") pod \"calico-typha-74cd4d7cdd-2bnjs\" (UID: \"eda8a71d-1671-471e-a07a-735c88f00784\") " pod="calico-system/calico-typha-74cd4d7cdd-2bnjs"
Sep 12 17:39:52.798106 kubelet[2491]: I0912 17:39:52.798016 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eda8a71d-1671-471e-a07a-735c88f00784-tigera-ca-bundle\") pod \"calico-typha-74cd4d7cdd-2bnjs\" (UID: \"eda8a71d-1671-471e-a07a-735c88f00784\") " pod="calico-system/calico-typha-74cd4d7cdd-2bnjs"
Sep 12 17:39:52.798106 kubelet[2491]: I0912 17:39:52.798039 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjrb\" (UniqueName: \"kubernetes.io/projected/eda8a71d-1671-471e-a07a-735c88f00784-kube-api-access-jvjrb\") pod \"calico-typha-74cd4d7cdd-2bnjs\" (UID: \"eda8a71d-1671-471e-a07a-735c88f00784\") " pod="calico-system/calico-typha-74cd4d7cdd-2bnjs"
Sep 12 17:39:53.004940 systemd[1]: Created slice kubepods-besteffort-podebd23487_7829_46f6_8dd3_c4cddd3d0d4c.slice - libcontainer container kubepods-besteffort-podebd23487_7829_46f6_8dd3_c4cddd3d0d4c.slice.
Sep 12 17:39:53.046692 kubelet[2491]: E0912 17:39:53.046655 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:53.047528 containerd[1439]: time="2025-09-12T17:39:53.047194668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74cd4d7cdd-2bnjs,Uid:eda8a71d-1671-471e-a07a-735c88f00784,Namespace:calico-system,Attempt:0,}"
Sep 12 17:39:53.071679 containerd[1439]: time="2025-09-12T17:39:53.071462791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:39:53.071679 containerd[1439]: time="2025-09-12T17:39:53.071509511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:39:53.071679 containerd[1439]: time="2025-09-12T17:39:53.071521151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:53.071679 containerd[1439]: time="2025-09-12T17:39:53.071625271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:53.093726 systemd[1]: Started cri-containerd-09a1341ab044ac16c5f234e6d05e7ab7ad39487131ad710d84fa94c7a45cb5f6.scope - libcontainer container 09a1341ab044ac16c5f234e6d05e7ab7ad39487131ad710d84fa94c7a45cb5f6.
Sep 12 17:39:53.099544 kubelet[2491]: I0912 17:39:53.099502 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ebd23487-7829-46f6-8dd3-c4cddd3d0d4c-cni-net-dir\") pod \"calico-node-xfg5x\" (UID: \"ebd23487-7829-46f6-8dd3-c4cddd3d0d4c\") " pod="calico-system/calico-node-xfg5x"
Sep 12 17:39:53.099544 kubelet[2491]: I0912 17:39:53.099550 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ebd23487-7829-46f6-8dd3-c4cddd3d0d4c-policysync\") pod \"calico-node-xfg5x\" (UID: \"ebd23487-7829-46f6-8dd3-c4cddd3d0d4c\") " pod="calico-system/calico-node-xfg5x"
Sep 12 17:39:53.099677 kubelet[2491]: I0912 17:39:53.099571 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xp4h\" (UniqueName: \"kubernetes.io/projected/ebd23487-7829-46f6-8dd3-c4cddd3d0d4c-kube-api-access-4xp4h\") pod \"calico-node-xfg5x\" (UID: \"ebd23487-7829-46f6-8dd3-c4cddd3d0d4c\") " pod="calico-system/calico-node-xfg5x"
Sep 12 17:39:53.099677 kubelet[2491]: I0912 17:39:53.099587 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ebd23487-7829-46f6-8dd3-c4cddd3d0d4c-node-certs\") pod \"calico-node-xfg5x\" (UID: \"ebd23487-7829-46f6-8dd3-c4cddd3d0d4c\") " pod="calico-system/calico-node-xfg5x"
Sep 12 17:39:53.099677 kubelet[2491]: I0912 17:39:53.099603 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebd23487-7829-46f6-8dd3-c4cddd3d0d4c-tigera-ca-bundle\") pod \"calico-node-xfg5x\" (UID: \"ebd23487-7829-46f6-8dd3-c4cddd3d0d4c\") " pod="calico-system/calico-node-xfg5x"
Sep 12 17:39:53.099677 kubelet[2491]: I0912 17:39:53.099619 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ebd23487-7829-46f6-8dd3-c4cddd3d0d4c-var-run-calico\") pod \"calico-node-xfg5x\" (UID: \"ebd23487-7829-46f6-8dd3-c4cddd3d0d4c\") " pod="calico-system/calico-node-xfg5x"
Sep 12 17:39:53.099677 kubelet[2491]: I0912 17:39:53.099637 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebd23487-7829-46f6-8dd3-c4cddd3d0d4c-xtables-lock\") pod \"calico-node-xfg5x\" (UID: \"ebd23487-7829-46f6-8dd3-c4cddd3d0d4c\") " pod="calico-system/calico-node-xfg5x"
Sep 12 17:39:53.099789 kubelet[2491]: I0912 17:39:53.099654 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ebd23487-7829-46f6-8dd3-c4cddd3d0d4c-cni-bin-dir\") pod \"calico-node-xfg5x\" (UID: \"ebd23487-7829-46f6-8dd3-c4cddd3d0d4c\") " pod="calico-system/calico-node-xfg5x"
Sep 12 17:39:53.099789 kubelet[2491]: I0912 17:39:53.099669 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ebd23487-7829-46f6-8dd3-c4cddd3d0d4c-flexvol-driver-host\") pod \"calico-node-xfg5x\" (UID: \"ebd23487-7829-46f6-8dd3-c4cddd3d0d4c\") " pod="calico-system/calico-node-xfg5x"
Sep 12 17:39:53.099789 kubelet[2491]: I0912 17:39:53.099683 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebd23487-7829-46f6-8dd3-c4cddd3d0d4c-lib-modules\") pod \"calico-node-xfg5x\" (UID: \"ebd23487-7829-46f6-8dd3-c4cddd3d0d4c\") " pod="calico-system/calico-node-xfg5x"
Sep 12 17:39:53.099789 kubelet[2491]: I0912 17:39:53.099698 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ebd23487-7829-46f6-8dd3-c4cddd3d0d4c-cni-log-dir\") pod \"calico-node-xfg5x\" (UID: \"ebd23487-7829-46f6-8dd3-c4cddd3d0d4c\") " pod="calico-system/calico-node-xfg5x"
Sep 12 17:39:53.099789 kubelet[2491]: I0912 17:39:53.099712 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ebd23487-7829-46f6-8dd3-c4cddd3d0d4c-var-lib-calico\") pod \"calico-node-xfg5x\" (UID: \"ebd23487-7829-46f6-8dd3-c4cddd3d0d4c\") " pod="calico-system/calico-node-xfg5x"
Sep 12 17:39:53.122721 containerd[1439]: time="2025-09-12T17:39:53.122611790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74cd4d7cdd-2bnjs,Uid:eda8a71d-1671-471e-a07a-735c88f00784,Namespace:calico-system,Attempt:0,} returns sandbox id \"09a1341ab044ac16c5f234e6d05e7ab7ad39487131ad710d84fa94c7a45cb5f6\""
Sep 12 17:39:53.123287 kubelet[2491]: E0912 17:39:53.123265 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:39:53.124413 containerd[1439]: time="2025-09-12T17:39:53.124211745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 17:39:53.238227 kubelet[2491]: E0912 17:39:53.238078 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pms6" podUID="0e93698b-a189-45ca-894b-4585a51c5842"
Sep 12 17:39:53.286107 kubelet[2491]: E0912 17:39:53.285844 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:39:53.286107 kubelet[2491]: W0912 17:39:53.285965 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:39:53.286621 kubelet[2491]: E0912 17:39:53.285994 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
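The kubelet re-probes the FlexVolume plugin directory on every reconcile pass, so this same three-line failure recurs essentially unchanged throughout the excerpt. Both error strings come straight from the Go standard library: os/exec reports "executable file not found in $PATH" when the driver binary is absent, and encoding/json reports "unexpected end of JSON input" when the driver's (empty) output is unmarshalled. A minimal reproduction, assuming no `uds` binary is on PATH:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Looking up a bare command name that is not on PATH reproduces the
	// exec error in the log ("uds" is assumed absent, as on the node above).
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println("driver call failed:", err) // ...executable file not found in $PATH
	}

	// With no driver to run, the captured output is empty; unmarshalling ""
	// yields the other logged error, "unexpected end of JSON input".
	var status map[string]interface{}
	if err := json.Unmarshal([]byte(""), &status); err != nil {
		fmt.Println("failed to unmarshal output:", err)
	}
}
```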
Sep 12 17:39:53.308499 containerd[1439]: time="2025-09-12T17:39:53.308457603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xfg5x,Uid:ebd23487-7829-46f6-8dd3-c4cddd3d0d4c,Namespace:calico-system,Attempt:0,}"
Sep 12 17:39:53.309009 kubelet[2491]: I0912 17:39:53.308965 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0e93698b-a189-45ca-894b-4585a51c5842-varrun\") pod \"csi-node-driver-7pms6\" (UID: \"0e93698b-a189-45ca-894b-4585a51c5842\") " pod="calico-system/csi-node-driver-7pms6"
Sep 12 17:39:53.309271 kubelet[2491]: I0912 17:39:53.309226 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8f2m\" (UniqueName: \"kubernetes.io/projected/0e93698b-a189-45ca-894b-4585a51c5842-kube-api-access-r8f2m\") pod \"csi-node-driver-7pms6\" (UID: \"0e93698b-a189-45ca-894b-4585a51c5842\") " pod="calico-system/csi-node-driver-7pms6"
Sep 12 17:39:53.310878 kubelet[2491]: I0912 17:39:53.310800 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0e93698b-a189-45ca-894b-4585a51c5842-registration-dir\") pod \"csi-node-driver-7pms6\" (UID: \"0e93698b-a189-45ca-894b-4585a51c5842\") " pod="calico-system/csi-node-driver-7pms6"
Sep 12 17:39:53.311169 kubelet[2491]: I0912 17:39:53.311110 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e93698b-a189-45ca-894b-4585a51c5842-kubelet-dir\") pod \"csi-node-driver-7pms6\" (UID: \"0e93698b-a189-45ca-894b-4585a51c5842\") " pod="calico-system/csi-node-driver-7pms6"
Sep 12 17:39:53.312187 kubelet[2491]: I0912 17:39:53.312132 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0e93698b-a189-45ca-894b-4585a51c5842-socket-dir\") pod \"csi-node-driver-7pms6\" (UID: \"0e93698b-a189-45ca-894b-4585a51c5842\") " pod="calico-system/csi-node-driver-7pms6"
Sep 12 17:39:53.336377 containerd[1439]: time="2025-09-12T17:39:53.335955916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:39:53.336377 containerd[1439]: time="2025-09-12T17:39:53.336362314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:39:53.336377 containerd[1439]: time="2025-09-12T17:39:53.336375634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:53.336721 containerd[1439]: time="2025-09-12T17:39:53.336502594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:53.357698 systemd[1]: Started cri-containerd-7c0d8b73446b43b2d3d8c32aef154e33d93b0e0b7fcc7399319b43a7757f6408.scope - libcontainer container 7c0d8b73446b43b2d3d8c32aef154e33d93b0e0b7fcc7399319b43a7757f6408.
Sep 12 17:39:53.388933 containerd[1439]: time="2025-09-12T17:39:53.388892789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xfg5x,Uid:ebd23487-7829-46f6-8dd3-c4cddd3d0d4c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c0d8b73446b43b2d3d8c32aef154e33d93b0e0b7fcc7399319b43a7757f6408\""
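The containerd lines above trace the CRI sequence for the calico-node pod: RunPodSandbox returns a sandbox id, after which the kubelet creates and starts containers inside it (the same flow the typha entries follow below). A rough sketch of that sequence against the CRI v1 gRPC API; the socket path is the usual containerd default, all request contents are placeholders, and this is illustrative rather than the kubelet's actual code:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI socket; adjust for the host in question.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox -> sandbox id (cf. `returns sandbox id "7c0d..."`).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{ /* pod metadata elided */ },
	})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox, then 3. StartContainer.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config:       &runtimeapi.ContainerConfig{ /* image, mounts elided */ },
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: c.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```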
Sep 12 17:39:54.162229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2213025120.mount: Deactivated successfully.
Sep 12 17:39:54.674753 kubelet[2491]: E0912 17:39:54.674552 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pms6" podUID="0e93698b-a189-45ca-894b-4585a51c5842"
Sep 12 17:39:55.062852 containerd[1439]: time="2025-09-12T17:39:55.062803793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:55.063709 containerd[1439]: time="2025-09-12T17:39:55.063563871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 12 17:39:55.064566 containerd[1439]: time="2025-09-12T17:39:55.064490548Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:55.066468 containerd[1439]: time="2025-09-12T17:39:55.066415063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:55.067323 containerd[1439]: time="2025-09-12T17:39:55.067274500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.943026155s"
Sep 12 17:39:55.067323 containerd[1439]: time="2025-09-12T17:39:55.067315060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 12 17:39:55.068449 containerd[1439]: time="2025-09-12T17:39:55.068281897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 12 17:39:55.081504 containerd[1439]: time="2025-09-12T17:39:55.081379179Z" level=info msg="CreateContainer within sandbox \"09a1341ab044ac16c5f234e6d05e7ab7ad39487131ad710d84fa94c7a45cb5f6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 12 17:39:55.095340 containerd[1439]: time="2025-09-12T17:39:55.095305538Z" level=info msg="CreateContainer within sandbox \"09a1341ab044ac16c5f234e6d05e7ab7ad39487131ad710d84fa94c7a45cb5f6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1e80aeb03677a6049b7589672ecf98aad16d7def4092af908169d895ea1b6202\""
Sep 12 17:39:55.096734 containerd[1439]: time="2025-09-12T17:39:55.096685734Z" level=info msg="StartContainer for \"1e80aeb03677a6049b7589672ecf98aad16d7def4092af908169d895ea1b6202\""
Sep 12 17:39:55.135703 systemd[1]: Started cri-containerd-1e80aeb03677a6049b7589672ecf98aad16d7def4092af908169d895ea1b6202.scope - libcontainer container 1e80aeb03677a6049b7589672ecf98aad16d7def4092af908169d895ea1b6202.
Sep 12 17:39:55.166513 containerd[1439]: time="2025-09-12T17:39:55.166425730Z" level=info msg="StartContainer for \"1e80aeb03677a6049b7589672ecf98aad16d7def4092af908169d895ea1b6202\" returns successfully"
Sep 12 17:39:55.738678 kubelet[2491]: E0912 17:39:55.738639 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
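The recurring "Nameserver limits exceeded" entries reflect the classic glibc resolver cap of three nameservers (MAXNS): when a pod's effective resolv.conf would carry more, the kubelet keeps the first three and warns about the rest, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A small sketch of that truncation; the limit of three is the real constraint, while the function and the fourth server are illustrative:

```go
package main

import "fmt"

// maxNameservers mirrors the resolver limit (MAXNS = 3) the kubelet
// enforces when assembling a pod's resolv.conf.
const maxNameservers = 3

// applyNameserverLimit keeps the first three servers and reports the rest,
// matching the "some nameservers have been omitted" wording in the log.
func applyNameserverLimit(servers []string) (applied, omitted []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	// "9.9.9.9" is a hypothetical fourth server; the log shows only the
	// three that survived the cut.
	applied, omitted := applyNameserverLimit(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	fmt.Println("applied:", applied, "omitted:", omitted)
}
```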
Error: unexpected end of JSON input" Sep 12 17:39:55.825756 kubelet[2491]: E0912 17:39:55.825745 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.825756 kubelet[2491]: W0912 17:39:55.825754 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.825809 kubelet[2491]: E0912 17:39:55.825762 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.825894 kubelet[2491]: E0912 17:39:55.825884 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.825894 kubelet[2491]: W0912 17:39:55.825893 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.825939 kubelet[2491]: E0912 17:39:55.825899 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.826035 kubelet[2491]: E0912 17:39:55.826025 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.826035 kubelet[2491]: W0912 17:39:55.826034 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.826078 kubelet[2491]: E0912 17:39:55.826041 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.826191 kubelet[2491]: E0912 17:39:55.826181 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.826191 kubelet[2491]: W0912 17:39:55.826190 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.826237 kubelet[2491]: E0912 17:39:55.826198 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.826330 kubelet[2491]: E0912 17:39:55.826320 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.826353 kubelet[2491]: W0912 17:39:55.826330 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.826353 kubelet[2491]: E0912 17:39:55.826337 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:55.826466 kubelet[2491]: E0912 17:39:55.826457 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.826466 kubelet[2491]: W0912 17:39:55.826466 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.826519 kubelet[2491]: E0912 17:39:55.826472 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.826618 kubelet[2491]: E0912 17:39:55.826608 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.826618 kubelet[2491]: W0912 17:39:55.826617 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.826666 kubelet[2491]: E0912 17:39:55.826627 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.826756 kubelet[2491]: E0912 17:39:55.826746 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.826756 kubelet[2491]: W0912 17:39:55.826754 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.826802 kubelet[2491]: E0912 17:39:55.826761 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.826903 kubelet[2491]: E0912 17:39:55.826892 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.826931 kubelet[2491]: W0912 17:39:55.826903 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.826931 kubelet[2491]: E0912 17:39:55.826911 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.827055 kubelet[2491]: E0912 17:39:55.827045 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.827079 kubelet[2491]: W0912 17:39:55.827054 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.827079 kubelet[2491]: E0912 17:39:55.827061 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:55.827208 kubelet[2491]: E0912 17:39:55.827198 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.827208 kubelet[2491]: W0912 17:39:55.827208 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.827257 kubelet[2491]: E0912 17:39:55.827214 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.827356 kubelet[2491]: E0912 17:39:55.827347 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.827356 kubelet[2491]: W0912 17:39:55.827355 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.827400 kubelet[2491]: E0912 17:39:55.827362 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.827483 kubelet[2491]: E0912 17:39:55.827475 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.827507 kubelet[2491]: W0912 17:39:55.827483 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.827507 kubelet[2491]: E0912 17:39:55.827489 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.832950 kubelet[2491]: E0912 17:39:55.832914 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.832950 kubelet[2491]: W0912 17:39:55.832931 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.832950 kubelet[2491]: E0912 17:39:55.832943 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.833177 kubelet[2491]: E0912 17:39:55.833150 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.833177 kubelet[2491]: W0912 17:39:55.833160 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.833177 kubelet[2491]: E0912 17:39:55.833168 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:55.833394 kubelet[2491]: E0912 17:39:55.833369 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.833394 kubelet[2491]: W0912 17:39:55.833386 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.833443 kubelet[2491]: E0912 17:39:55.833399 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.833567 kubelet[2491]: E0912 17:39:55.833556 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.833567 kubelet[2491]: W0912 17:39:55.833566 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.833619 kubelet[2491]: E0912 17:39:55.833573 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.833772 kubelet[2491]: E0912 17:39:55.833753 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.833772 kubelet[2491]: W0912 17:39:55.833763 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.833772 kubelet[2491]: E0912 17:39:55.833770 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.833973 kubelet[2491]: E0912 17:39:55.833961 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.833973 kubelet[2491]: W0912 17:39:55.833973 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.834039 kubelet[2491]: E0912 17:39:55.833979 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.834251 kubelet[2491]: E0912 17:39:55.834222 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.834251 kubelet[2491]: W0912 17:39:55.834239 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.834251 kubelet[2491]: E0912 17:39:55.834251 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:55.834438 kubelet[2491]: E0912 17:39:55.834423 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.834462 kubelet[2491]: W0912 17:39:55.834437 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.834462 kubelet[2491]: E0912 17:39:55.834445 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.834596 kubelet[2491]: E0912 17:39:55.834587 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.834596 kubelet[2491]: W0912 17:39:55.834596 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.834644 kubelet[2491]: E0912 17:39:55.834605 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.834744 kubelet[2491]: E0912 17:39:55.834734 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.834744 kubelet[2491]: W0912 17:39:55.834743 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.834789 kubelet[2491]: E0912 17:39:55.834749 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.834897 kubelet[2491]: E0912 17:39:55.834888 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.834897 kubelet[2491]: W0912 17:39:55.834896 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.834942 kubelet[2491]: E0912 17:39:55.834905 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.835165 kubelet[2491]: E0912 17:39:55.835152 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.835192 kubelet[2491]: W0912 17:39:55.835166 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.835192 kubelet[2491]: E0912 17:39:55.835176 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:55.835386 kubelet[2491]: E0912 17:39:55.835375 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.835410 kubelet[2491]: W0912 17:39:55.835385 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.835410 kubelet[2491]: E0912 17:39:55.835393 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.835564 kubelet[2491]: E0912 17:39:55.835554 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.835587 kubelet[2491]: W0912 17:39:55.835563 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.835587 kubelet[2491]: E0912 17:39:55.835571 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.835730 kubelet[2491]: E0912 17:39:55.835705 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.835730 kubelet[2491]: W0912 17:39:55.835715 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.835730 kubelet[2491]: E0912 17:39:55.835723 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.835890 kubelet[2491]: E0912 17:39:55.835879 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.835890 kubelet[2491]: W0912 17:39:55.835888 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.835937 kubelet[2491]: E0912 17:39:55.835895 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:55.836167 kubelet[2491]: E0912 17:39:55.836152 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.836192 kubelet[2491]: W0912 17:39:55.836166 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.836192 kubelet[2491]: E0912 17:39:55.836177 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:55.836349 kubelet[2491]: E0912 17:39:55.836339 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:55.836378 kubelet[2491]: W0912 17:39:55.836349 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:55.836378 kubelet[2491]: E0912 17:39:55.836356 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:56.076631 containerd[1439]: time="2025-09-12T17:39:56.075663074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:56.079947 containerd[1439]: time="2025-09-12T17:39:56.079611423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 12 17:39:56.083465 containerd[1439]: time="2025-09-12T17:39:56.083209052Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:56.087729 containerd[1439]: time="2025-09-12T17:39:56.087680720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:56.088379 containerd[1439]: time="2025-09-12T17:39:56.088330078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.020015661s" Sep 12 17:39:56.088379 containerd[1439]: time="2025-09-12T17:39:56.088370878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 17:39:56.093662 containerd[1439]: time="2025-09-12T17:39:56.093613343Z" level=info msg="CreateContainer within sandbox \"7c0d8b73446b43b2d3d8c32aef154e33d93b0e0b7fcc7399319b43a7757f6408\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:39:56.116448 containerd[1439]: time="2025-09-12T17:39:56.116384319Z" level=info msg="CreateContainer within sandbox \"7c0d8b73446b43b2d3d8c32aef154e33d93b0e0b7fcc7399319b43a7757f6408\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"782675a42f505fcb92f7046041323f7e501ff74ca6d94083aeff568a81663a56\"" Sep 12 17:39:56.117349 containerd[1439]: time="2025-09-12T17:39:56.117324996Z" level=info msg="StartContainer for \"782675a42f505fcb92f7046041323f7e501ff74ca6d94083aeff568a81663a56\"" Sep 12 17:39:56.152705 systemd[1]: Started cri-containerd-782675a42f505fcb92f7046041323f7e501ff74ca6d94083aeff568a81663a56.scope - libcontainer container 782675a42f505fcb92f7046041323f7e501ff74ca6d94083aeff568a81663a56. 
Sep 12 17:39:56.176406 containerd[1439]: time="2025-09-12T17:39:56.176363989Z" level=info msg="StartContainer for \"782675a42f505fcb92f7046041323f7e501ff74ca6d94083aeff568a81663a56\" returns successfully" Sep 12 17:39:56.190354 systemd[1]: cri-containerd-782675a42f505fcb92f7046041323f7e501ff74ca6d94083aeff568a81663a56.scope: Deactivated successfully. Sep 12 17:39:56.221810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-782675a42f505fcb92f7046041323f7e501ff74ca6d94083aeff568a81663a56-rootfs.mount: Deactivated successfully. Sep 12 17:39:56.303529 containerd[1439]: time="2025-09-12T17:39:56.303469750Z" level=info msg="shim disconnected" id=782675a42f505fcb92f7046041323f7e501ff74ca6d94083aeff568a81663a56 namespace=k8s.io Sep 12 17:39:56.303529 containerd[1439]: time="2025-09-12T17:39:56.303525390Z" level=warning msg="cleaning up after shim disconnected" id=782675a42f505fcb92f7046041323f7e501ff74ca6d94083aeff568a81663a56 namespace=k8s.io Sep 12 17:39:56.303529 containerd[1439]: time="2025-09-12T17:39:56.303549830Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:39:56.674053 kubelet[2491]: E0912 17:39:56.674004 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pms6" podUID="0e93698b-a189-45ca-894b-4585a51c5842" Sep 12 17:39:56.740798 kubelet[2491]: I0912 17:39:56.740768 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:39:56.741406 kubelet[2491]: E0912 17:39:56.741382 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:39:56.741555 containerd[1439]: time="2025-09-12T17:39:56.741508032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:39:56.756782 kubelet[2491]: I0912 17:39:56.756677 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74cd4d7cdd-2bnjs" podStartSLOduration=2.812457236 podStartE2EDuration="4.756662029s" podCreationTimestamp="2025-09-12 17:39:52 +0000 UTC" firstStartedPulling="2025-09-12 17:39:53.123966745 +0000 UTC m=+23.545312914" lastFinishedPulling="2025-09-12 17:39:55.068171578 +0000 UTC m=+25.489517707" observedRunningTime="2025-09-12 17:39:55.748388825 +0000 UTC m=+26.169735074" watchObservedRunningTime="2025-09-12 17:39:56.756662029 +0000 UTC m=+27.178008198" Sep 12 17:39:58.673855 kubelet[2491]: E0912 17:39:58.673811 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pms6" podUID="0e93698b-a189-45ca-894b-4585a51c5842" Sep 12 17:39:59.823162 containerd[1439]: time="2025-09-12T17:39:59.823113196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:59.823974 containerd[1439]: time="2025-09-12T17:39:59.823549435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 17:39:59.824696 containerd[1439]: time="2025-09-12T17:39:59.824673392Z" level=info msg="ImageCreate event 
name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:59.827155 containerd[1439]: time="2025-09-12T17:39:59.827106306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:59.827791 containerd[1439]: time="2025-09-12T17:39:59.827761145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.086205193s" Sep 12 17:39:59.827866 containerd[1439]: time="2025-09-12T17:39:59.827794024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 17:39:59.841799 containerd[1439]: time="2025-09-12T17:39:59.841755629Z" level=info msg="CreateContainer within sandbox \"7c0d8b73446b43b2d3d8c32aef154e33d93b0e0b7fcc7399319b43a7757f6408\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:39:59.866748 containerd[1439]: time="2025-09-12T17:39:59.866711245Z" level=info msg="CreateContainer within sandbox \"7c0d8b73446b43b2d3d8c32aef154e33d93b0e0b7fcc7399319b43a7757f6408\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ca519451e098e025621e846b965665268221d43d9148cf28172f2b852c7166a3\"" Sep 12 17:39:59.867443 containerd[1439]: time="2025-09-12T17:39:59.867368243Z" level=info msg="StartContainer for \"ca519451e098e025621e846b965665268221d43d9148cf28172f2b852c7166a3\"" Sep 12 17:39:59.897695 systemd[1]: Started cri-containerd-ca519451e098e025621e846b965665268221d43d9148cf28172f2b852c7166a3.scope - libcontainer container ca519451e098e025621e846b965665268221d43d9148cf28172f2b852c7166a3. Sep 12 17:39:59.918900 containerd[1439]: time="2025-09-12T17:39:59.918859112Z" level=info msg="StartContainer for \"ca519451e098e025621e846b965665268221d43d9148cf28172f2b852c7166a3\" returns successfully" Sep 12 17:40:00.464567 systemd[1]: cri-containerd-ca519451e098e025621e846b965665268221d43d9148cf28172f2b852c7166a3.scope: Deactivated successfully. Sep 12 17:40:00.494245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca519451e098e025621e846b965665268221d43d9148cf28172f2b852c7166a3-rootfs.mount: Deactivated successfully. 
Sep 12 17:40:00.528888 kubelet[2491]: I0912 17:40:00.528856 2491 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:40:00.560040 containerd[1439]: time="2025-09-12T17:40:00.559976881Z" level=info msg="shim disconnected" id=ca519451e098e025621e846b965665268221d43d9148cf28172f2b852c7166a3 namespace=k8s.io Sep 12 17:40:00.560040 containerd[1439]: time="2025-09-12T17:40:00.560033201Z" level=warning msg="cleaning up after shim disconnected" id=ca519451e098e025621e846b965665268221d43d9148cf28172f2b852c7166a3 namespace=k8s.io Sep 12 17:40:00.560040 containerd[1439]: time="2025-09-12T17:40:00.560043281Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:40:00.609349 systemd[1]: Created slice kubepods-besteffort-pod911054c3_b540_44ec_b7d0_e107c4c14fea.slice - libcontainer container kubepods-besteffort-pod911054c3_b540_44ec_b7d0_e107c4c14fea.slice. Sep 12 17:40:00.618760 systemd[1]: Created slice kubepods-besteffort-poddc49e149_a094_4d79_a8c7_27e8dff370b3.slice - libcontainer container kubepods-besteffort-poddc49e149_a094_4d79_a8c7_27e8dff370b3.slice. Sep 12 17:40:00.625792 systemd[1]: Created slice kubepods-burstable-pod78c55e80_8b1c_48d6_a5e7_2c7abb2426e4.slice - libcontainer container kubepods-burstable-pod78c55e80_8b1c_48d6_a5e7_2c7abb2426e4.slice. Sep 12 17:40:00.630499 systemd[1]: Created slice kubepods-besteffort-pod3a4561d4_7536_4a44_be2c_afac80d7a063.slice - libcontainer container kubepods-besteffort-pod3a4561d4_7536_4a44_be2c_afac80d7a063.slice. Sep 12 17:40:00.635969 systemd[1]: Created slice kubepods-besteffort-pod1546fa5b_41ca_4825_9d46_eec66efc2e80.slice - libcontainer container kubepods-besteffort-pod1546fa5b_41ca_4825_9d46_eec66efc2e80.slice. Sep 12 17:40:00.641013 systemd[1]: Created slice kubepods-burstable-pod5a8ad9ed_e732_4605_8bb5_90e319ea4f13.slice - libcontainer container kubepods-burstable-pod5a8ad9ed_e732_4605_8bb5_90e319ea4f13.slice. Sep 12 17:40:00.645952 systemd[1]: Created slice kubepods-besteffort-pod249c9c72_dc41_4c2e_9f20_c59454807552.slice - libcontainer container kubepods-besteffort-pod249c9c72_dc41_4c2e_9f20_c59454807552.slice. 
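The burst of "Created slice" lines is kubelet's systemd cgroup driver materializing one slice per newly scheduled pod, named kubepods-<qos>-pod<uid>.slice for besteffort and burstable pods, with the dashes in the pod UID escaped to underscores because systemd interprets "-" in a unit name as slice nesting. A small illustrative helper (podSliceName is a made-up name, not kubelet code) reproduces the names above:

// Illustrative only: derive the systemd slice name for a besteffort or
// burstable pod, matching the kubepods-*.slice units in the journal above.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	// systemd treats "-" as a path separator inside unit names, so the
	// UID's dashes are escaped to underscores.
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// Prints "kubepods-besteffort-pod911054c3_b540_44ec_b7d0_e107c4c14fea.slice",
	// the slice created for the whisker pod above.
	fmt.Println(podSliceName("besteffort", "911054c3-b540-44ec-b7d0-e107c4c14fea"))
}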
Sep 12 17:40:00.665180 kubelet[2491]: I0912 17:40:00.665140 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42hwl\" (UniqueName: \"kubernetes.io/projected/1546fa5b-41ca-4825-9d46-eec66efc2e80-kube-api-access-42hwl\") pod \"calico-kube-controllers-855787cfcf-fvkd7\" (UID: \"1546fa5b-41ca-4825-9d46-eec66efc2e80\") " pod="calico-system/calico-kube-controllers-855787cfcf-fvkd7" Sep 12 17:40:00.665180 kubelet[2491]: I0912 17:40:00.665184 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pcr\" (UniqueName: \"kubernetes.io/projected/249c9c72-dc41-4c2e-9f20-c59454807552-kube-api-access-h4pcr\") pod \"calico-apiserver-bc44ff76c-zs9w2\" (UID: \"249c9c72-dc41-4c2e-9f20-c59454807552\") " pod="calico-apiserver/calico-apiserver-bc44ff76c-zs9w2" Sep 12 17:40:00.665334 kubelet[2491]: I0912 17:40:00.665204 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pllrk\" (UniqueName: \"kubernetes.io/projected/dc49e149-a094-4d79-a8c7-27e8dff370b3-kube-api-access-pllrk\") pod \"calico-apiserver-bc44ff76c-46zh9\" (UID: \"dc49e149-a094-4d79-a8c7-27e8dff370b3\") " pod="calico-apiserver/calico-apiserver-bc44ff76c-46zh9" Sep 12 17:40:00.665334 kubelet[2491]: I0912 17:40:00.665275 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a4561d4-7536-4a44-be2c-afac80d7a063-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-726gn\" (UID: \"3a4561d4-7536-4a44-be2c-afac80d7a063\") " pod="calico-system/goldmane-54d579b49d-726gn" Sep 12 17:40:00.665334 kubelet[2491]: I0912 17:40:00.665324 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfcqd\" (UniqueName: \"kubernetes.io/projected/911054c3-b540-44ec-b7d0-e107c4c14fea-kube-api-access-xfcqd\") pod \"whisker-86877d4794-4zh8m\" (UID: \"911054c3-b540-44ec-b7d0-e107c4c14fea\") " pod="calico-system/whisker-86877d4794-4zh8m" Sep 12 17:40:00.665400 kubelet[2491]: I0912 17:40:00.665342 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3a4561d4-7536-4a44-be2c-afac80d7a063-goldmane-key-pair\") pod \"goldmane-54d579b49d-726gn\" (UID: \"3a4561d4-7536-4a44-be2c-afac80d7a063\") " pod="calico-system/goldmane-54d579b49d-726gn" Sep 12 17:40:00.665400 kubelet[2491]: I0912 17:40:00.665365 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9gfs\" (UniqueName: \"kubernetes.io/projected/5a8ad9ed-e732-4605-8bb5-90e319ea4f13-kube-api-access-t9gfs\") pod \"coredns-674b8bbfcf-hdkml\" (UID: \"5a8ad9ed-e732-4605-8bb5-90e319ea4f13\") " pod="kube-system/coredns-674b8bbfcf-hdkml" Sep 12 17:40:00.665400 kubelet[2491]: I0912 17:40:00.665383 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/911054c3-b540-44ec-b7d0-e107c4c14fea-whisker-ca-bundle\") pod \"whisker-86877d4794-4zh8m\" (UID: \"911054c3-b540-44ec-b7d0-e107c4c14fea\") " pod="calico-system/whisker-86877d4794-4zh8m" Sep 12 17:40:00.669176 kubelet[2491]: I0912 17:40:00.665415 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/78c55e80-8b1c-48d6-a5e7-2c7abb2426e4-config-volume\") pod \"coredns-674b8bbfcf-qgttl\" (UID: \"78c55e80-8b1c-48d6-a5e7-2c7abb2426e4\") " pod="kube-system/coredns-674b8bbfcf-qgttl" Sep 12 17:40:00.669176 kubelet[2491]: I0912 17:40:00.666889 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qcmk\" (UniqueName: \"kubernetes.io/projected/78c55e80-8b1c-48d6-a5e7-2c7abb2426e4-kube-api-access-6qcmk\") pod \"coredns-674b8bbfcf-qgttl\" (UID: \"78c55e80-8b1c-48d6-a5e7-2c7abb2426e4\") " pod="kube-system/coredns-674b8bbfcf-qgttl" Sep 12 17:40:00.669176 kubelet[2491]: I0912 17:40:00.666912 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4561d4-7536-4a44-be2c-afac80d7a063-config\") pod \"goldmane-54d579b49d-726gn\" (UID: \"3a4561d4-7536-4a44-be2c-afac80d7a063\") " pod="calico-system/goldmane-54d579b49d-726gn" Sep 12 17:40:00.669176 kubelet[2491]: I0912 17:40:00.666931 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/911054c3-b540-44ec-b7d0-e107c4c14fea-whisker-backend-key-pair\") pod \"whisker-86877d4794-4zh8m\" (UID: \"911054c3-b540-44ec-b7d0-e107c4c14fea\") " pod="calico-system/whisker-86877d4794-4zh8m" Sep 12 17:40:00.669176 kubelet[2491]: I0912 17:40:00.666945 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a8ad9ed-e732-4605-8bb5-90e319ea4f13-config-volume\") pod \"coredns-674b8bbfcf-hdkml\" (UID: \"5a8ad9ed-e732-4605-8bb5-90e319ea4f13\") " pod="kube-system/coredns-674b8bbfcf-hdkml" Sep 12 17:40:00.669391 kubelet[2491]: I0912 17:40:00.666959 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dc49e149-a094-4d79-a8c7-27e8dff370b3-calico-apiserver-certs\") pod \"calico-apiserver-bc44ff76c-46zh9\" (UID: \"dc49e149-a094-4d79-a8c7-27e8dff370b3\") " pod="calico-apiserver/calico-apiserver-bc44ff76c-46zh9" Sep 12 17:40:00.669391 kubelet[2491]: I0912 17:40:00.666977 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssqs5\" (UniqueName: \"kubernetes.io/projected/3a4561d4-7536-4a44-be2c-afac80d7a063-kube-api-access-ssqs5\") pod \"goldmane-54d579b49d-726gn\" (UID: \"3a4561d4-7536-4a44-be2c-afac80d7a063\") " pod="calico-system/goldmane-54d579b49d-726gn" Sep 12 17:40:00.669391 kubelet[2491]: I0912 17:40:00.666999 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1546fa5b-41ca-4825-9d46-eec66efc2e80-tigera-ca-bundle\") pod \"calico-kube-controllers-855787cfcf-fvkd7\" (UID: \"1546fa5b-41ca-4825-9d46-eec66efc2e80\") " pod="calico-system/calico-kube-controllers-855787cfcf-fvkd7" Sep 12 17:40:00.669391 kubelet[2491]: I0912 17:40:00.667015 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/249c9c72-dc41-4c2e-9f20-c59454807552-calico-apiserver-certs\") pod \"calico-apiserver-bc44ff76c-zs9w2\" (UID: \"249c9c72-dc41-4c2e-9f20-c59454807552\") " pod="calico-apiserver/calico-apiserver-bc44ff76c-zs9w2" 
Sep 12 17:40:00.678630 systemd[1]: Created slice kubepods-besteffort-pod0e93698b_a189_45ca_894b_4585a51c5842.slice - libcontainer container kubepods-besteffort-pod0e93698b_a189_45ca_894b_4585a51c5842.slice. Sep 12 17:40:00.680715 containerd[1439]: time="2025-09-12T17:40:00.680679063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pms6,Uid:0e93698b-a189-45ca-894b-4585a51c5842,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:00.751140 containerd[1439]: time="2025-09-12T17:40:00.751092169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:40:00.801446 containerd[1439]: time="2025-09-12T17:40:00.801354285Z" level=error msg="Failed to destroy network for sandbox \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:00.801952 containerd[1439]: time="2025-09-12T17:40:00.801710444Z" level=error msg="encountered an error cleaning up failed sandbox \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:00.801952 containerd[1439]: time="2025-09-12T17:40:00.801762404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pms6,Uid:0e93698b-a189-45ca-894b-4585a51c5842,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:00.805036 kubelet[2491]: E0912 17:40:00.804989 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:00.805147 kubelet[2491]: E0912 17:40:00.805064 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7pms6" Sep 12 17:40:00.805147 kubelet[2491]: E0912 17:40:00.805091 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7pms6" Sep 12 17:40:00.805264 kubelet[2491]: E0912 17:40:00.805144 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-7pms6_calico-system(0e93698b-a189-45ca-894b-4585a51c5842)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7pms6_calico-system(0e93698b-a189-45ca-894b-4585a51c5842)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7pms6" podUID="0e93698b-a189-45ca-894b-4585a51c5842" Sep 12 17:40:00.914699 containerd[1439]: time="2025-09-12T17:40:00.914657325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86877d4794-4zh8m,Uid:911054c3-b540-44ec-b7d0-e107c4c14fea,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:00.924568 containerd[1439]: time="2025-09-12T17:40:00.923004344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc44ff76c-46zh9,Uid:dc49e149-a094-4d79-a8c7-27e8dff370b3,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:40:00.929559 kubelet[2491]: E0912 17:40:00.929505 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:40:00.930010 containerd[1439]: time="2025-09-12T17:40:00.929896767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qgttl,Uid:78c55e80-8b1c-48d6-a5e7-2c7abb2426e4,Namespace:kube-system,Attempt:0,}" Sep 12 17:40:00.933945 containerd[1439]: time="2025-09-12T17:40:00.933912117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-726gn,Uid:3a4561d4-7536-4a44-be2c-afac80d7a063,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:00.939570 containerd[1439]: time="2025-09-12T17:40:00.939350864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855787cfcf-fvkd7,Uid:1546fa5b-41ca-4825-9d46-eec66efc2e80,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:00.945302 kubelet[2491]: E0912 17:40:00.945136 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:40:00.950159 containerd[1439]: time="2025-09-12T17:40:00.949098280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc44ff76c-zs9w2,Uid:249c9c72-dc41-4c2e-9f20-c59454807552,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:40:00.950159 containerd[1439]: time="2025-09-12T17:40:00.949803238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hdkml,Uid:5a8ad9ed-e732-4605-8bb5-90e319ea4f13,Namespace:kube-system,Attempt:0,}" Sep 12 17:40:01.002751 containerd[1439]: time="2025-09-12T17:40:01.001425391Z" level=error msg="Failed to destroy network for sandbox \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.003932 containerd[1439]: time="2025-09-12T17:40:01.003807105Z" level=error msg="encountered an error cleaning up failed sandbox \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.003932 containerd[1439]: time="2025-09-12T17:40:01.003871465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86877d4794-4zh8m,Uid:911054c3-b540-44ec-b7d0-e107c4c14fea,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.004145 kubelet[2491]: E0912 17:40:01.004073 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.004145 kubelet[2491]: E0912 17:40:01.004140 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86877d4794-4zh8m" Sep 12 17:40:01.004249 kubelet[2491]: E0912 17:40:01.004160 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86877d4794-4zh8m" Sep 12 17:40:01.004249 kubelet[2491]: E0912 17:40:01.004202 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-86877d4794-4zh8m_calico-system(911054c3-b540-44ec-b7d0-e107c4c14fea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-86877d4794-4zh8m_calico-system(911054c3-b540-44ec-b7d0-e107c4c14fea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-86877d4794-4zh8m" podUID="911054c3-b540-44ec-b7d0-e107c4c14fea" Sep 12 17:40:01.029400 containerd[1439]: time="2025-09-12T17:40:01.029268044Z" level=error msg="Failed to destroy network for sandbox \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.030876 containerd[1439]: time="2025-09-12T17:40:01.030832480Z" level=error msg="encountered an error cleaning up failed sandbox \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.031289 containerd[1439]: time="2025-09-12T17:40:01.031161999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc44ff76c-46zh9,Uid:dc49e149-a094-4d79-a8c7-27e8dff370b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.031949 kubelet[2491]: E0912 17:40:01.031873 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.032117 kubelet[2491]: E0912 17:40:01.031949 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc44ff76c-46zh9" Sep 12 17:40:01.032117 kubelet[2491]: E0912 17:40:01.031975 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc44ff76c-46zh9" Sep 12 17:40:01.032117 kubelet[2491]: E0912 17:40:01.032022 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc44ff76c-46zh9_calico-apiserver(dc49e149-a094-4d79-a8c7-27e8dff370b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc44ff76c-46zh9_calico-apiserver(dc49e149-a094-4d79-a8c7-27e8dff370b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc44ff76c-46zh9" podUID="dc49e149-a094-4d79-a8c7-27e8dff370b3" Sep 12 17:40:01.039042 containerd[1439]: time="2025-09-12T17:40:01.038941821Z" level=error msg="Failed to destroy network for sandbox \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.039345 containerd[1439]: time="2025-09-12T17:40:01.039316820Z" level=error msg="encountered an error 
cleaning up failed sandbox \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.039667 containerd[1439]: time="2025-09-12T17:40:01.039641419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qgttl,Uid:78c55e80-8b1c-48d6-a5e7-2c7abb2426e4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.042584 kubelet[2491]: E0912 17:40:01.042169 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.042584 kubelet[2491]: E0912 17:40:01.042242 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qgttl" Sep 12 17:40:01.042584 kubelet[2491]: E0912 17:40:01.042260 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qgttl" Sep 12 17:40:01.042850 kubelet[2491]: E0912 17:40:01.042306 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qgttl_kube-system(78c55e80-8b1c-48d6-a5e7-2c7abb2426e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qgttl_kube-system(78c55e80-8b1c-48d6-a5e7-2c7abb2426e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qgttl" podUID="78c55e80-8b1c-48d6-a5e7-2c7abb2426e4" Sep 12 17:40:01.057144 containerd[1439]: time="2025-09-12T17:40:01.056994137Z" level=error msg="Failed to destroy network for sandbox \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.057955 containerd[1439]: 
time="2025-09-12T17:40:01.057884255Z" level=error msg="encountered an error cleaning up failed sandbox \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.058139 containerd[1439]: time="2025-09-12T17:40:01.057935415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc44ff76c-zs9w2,Uid:249c9c72-dc41-4c2e-9f20-c59454807552,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.058615 kubelet[2491]: E0912 17:40:01.058578 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.058688 kubelet[2491]: E0912 17:40:01.058629 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc44ff76c-zs9w2" Sep 12 17:40:01.058688 kubelet[2491]: E0912 17:40:01.058659 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc44ff76c-zs9w2" Sep 12 17:40:01.059070 kubelet[2491]: E0912 17:40:01.058703 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc44ff76c-zs9w2_calico-apiserver(249c9c72-dc41-4c2e-9f20-c59454807552)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc44ff76c-zs9w2_calico-apiserver(249c9c72-dc41-4c2e-9f20-c59454807552)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc44ff76c-zs9w2" podUID="249c9c72-dc41-4c2e-9f20-c59454807552" Sep 12 17:40:01.064181 containerd[1439]: time="2025-09-12T17:40:01.064130160Z" level=error msg="Failed to destroy network for sandbox \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.064770 containerd[1439]: time="2025-09-12T17:40:01.064724039Z" level=error msg="encountered an error cleaning up failed sandbox \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.065023 containerd[1439]: time="2025-09-12T17:40:01.064973198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855787cfcf-fvkd7,Uid:1546fa5b-41ca-4825-9d46-eec66efc2e80,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.065479 kubelet[2491]: E0912 17:40:01.065432 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.065564 kubelet[2491]: E0912 17:40:01.065489 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-855787cfcf-fvkd7" Sep 12 17:40:01.065564 kubelet[2491]: E0912 17:40:01.065507 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-855787cfcf-fvkd7" Sep 12 17:40:01.065620 kubelet[2491]: E0912 17:40:01.065556 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-855787cfcf-fvkd7_calico-system(1546fa5b-41ca-4825-9d46-eec66efc2e80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-855787cfcf-fvkd7_calico-system(1546fa5b-41ca-4825-9d46-eec66efc2e80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-855787cfcf-fvkd7" podUID="1546fa5b-41ca-4825-9d46-eec66efc2e80" Sep 12 17:40:01.067652 containerd[1439]: time="2025-09-12T17:40:01.067623272Z" level=error msg="Failed to destroy network for sandbox 
\"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.069056 containerd[1439]: time="2025-09-12T17:40:01.068929909Z" level=error msg="Failed to destroy network for sandbox \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.069922 containerd[1439]: time="2025-09-12T17:40:01.069891547Z" level=error msg="encountered an error cleaning up failed sandbox \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.070033 containerd[1439]: time="2025-09-12T17:40:01.070012186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-726gn,Uid:3a4561d4-7536-4a44-be2c-afac80d7a063,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.070296 kubelet[2491]: E0912 17:40:01.070258 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.072112 containerd[1439]: time="2025-09-12T17:40:01.072064821Z" level=error msg="encountered an error cleaning up failed sandbox \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.072255 containerd[1439]: time="2025-09-12T17:40:01.072225661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hdkml,Uid:5a8ad9ed-e732-4605-8bb5-90e319ea4f13,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.072453 kubelet[2491]: E0912 17:40:01.072426 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.072502 kubelet[2491]: E0912 
17:40:01.072465 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hdkml" Sep 12 17:40:01.072502 kubelet[2491]: E0912 17:40:01.072484 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hdkml" Sep 12 17:40:01.072605 kubelet[2491]: E0912 17:40:01.072518 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hdkml_kube-system(5a8ad9ed-e732-4605-8bb5-90e319ea4f13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hdkml_kube-system(5a8ad9ed-e732-4605-8bb5-90e319ea4f13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hdkml" podUID="5a8ad9ed-e732-4605-8bb5-90e319ea4f13" Sep 12 17:40:01.072892 kubelet[2491]: E0912 17:40:01.070305 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-726gn" Sep 12 17:40:01.072936 kubelet[2491]: E0912 17:40:01.072899 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-726gn" Sep 12 17:40:01.072972 kubelet[2491]: E0912 17:40:01.072950 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-726gn_calico-system(3a4561d4-7536-4a44-be2c-afac80d7a063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-726gn_calico-system(3a4561d4-7536-4a44-be2c-afac80d7a063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-726gn" podUID="3a4561d4-7536-4a44-be2c-afac80d7a063" Sep 12 17:40:01.761349 
kubelet[2491]: I0912 17:40:01.761313 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:01.763049 kubelet[2491]: I0912 17:40:01.762775 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:01.763492 containerd[1439]: time="2025-09-12T17:40:01.762858926Z" level=info msg="StopPodSandbox for \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\"" Sep 12 17:40:01.763492 containerd[1439]: time="2025-09-12T17:40:01.763216566Z" level=info msg="Ensure that sandbox 2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa in task-service has been cleanup successfully" Sep 12 17:40:01.763732 kubelet[2491]: I0912 17:40:01.763707 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:01.764768 containerd[1439]: time="2025-09-12T17:40:01.764435283Z" level=info msg="StopPodSandbox for \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\"" Sep 12 17:40:01.764768 containerd[1439]: time="2025-09-12T17:40:01.764450643Z" level=info msg="StopPodSandbox for \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\"" Sep 12 17:40:01.764768 containerd[1439]: time="2025-09-12T17:40:01.764591082Z" level=info msg="Ensure that sandbox 88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426 in task-service has been cleanup successfully" Sep 12 17:40:01.764768 containerd[1439]: time="2025-09-12T17:40:01.764685602Z" level=info msg="Ensure that sandbox 35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd in task-service has been cleanup successfully" Sep 12 17:40:01.767813 kubelet[2491]: I0912 17:40:01.767779 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:01.769110 containerd[1439]: time="2025-09-12T17:40:01.769069471Z" level=info msg="StopPodSandbox for \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\"" Sep 12 17:40:01.769867 containerd[1439]: time="2025-09-12T17:40:01.769446111Z" level=info msg="Ensure that sandbox dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f in task-service has been cleanup successfully" Sep 12 17:40:01.783412 kubelet[2491]: I0912 17:40:01.783388 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Sep 12 17:40:01.784867 containerd[1439]: time="2025-09-12T17:40:01.784766314Z" level=info msg="StopPodSandbox for \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\"" Sep 12 17:40:01.785552 containerd[1439]: time="2025-09-12T17:40:01.785207433Z" level=info msg="Ensure that sandbox b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef in task-service has been cleanup successfully" Sep 12 17:40:01.785927 kubelet[2491]: I0912 17:40:01.785837 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Sep 12 17:40:01.786353 containerd[1439]: time="2025-09-12T17:40:01.786326630Z" level=info msg="StopPodSandbox for \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\"" Sep 12 17:40:01.786617 containerd[1439]: 
time="2025-09-12T17:40:01.786593270Z" level=info msg="Ensure that sandbox bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12 in task-service has been cleanup successfully" Sep 12 17:40:01.792184 kubelet[2491]: I0912 17:40:01.792149 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:01.793292 containerd[1439]: time="2025-09-12T17:40:01.792814335Z" level=info msg="StopPodSandbox for \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\"" Sep 12 17:40:01.793292 containerd[1439]: time="2025-09-12T17:40:01.793044374Z" level=info msg="Ensure that sandbox c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52 in task-service has been cleanup successfully" Sep 12 17:40:01.794341 kubelet[2491]: I0912 17:40:01.794236 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:01.794866 containerd[1439]: time="2025-09-12T17:40:01.794751690Z" level=info msg="StopPodSandbox for \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\"" Sep 12 17:40:01.794947 containerd[1439]: time="2025-09-12T17:40:01.794921570Z" level=info msg="Ensure that sandbox 79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156 in task-service has been cleanup successfully" Sep 12 17:40:01.802927 containerd[1439]: time="2025-09-12T17:40:01.802879470Z" level=error msg="StopPodSandbox for \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\" failed" error="failed to destroy network for sandbox \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.803199 kubelet[2491]: E0912 17:40:01.803159 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:01.806123 kubelet[2491]: E0912 17:40:01.806039 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426"} Sep 12 17:40:01.806201 kubelet[2491]: E0912 17:40:01.806131 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a4561d4-7536-4a44-be2c-afac80d7a063\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:01.806201 kubelet[2491]: E0912 17:40:01.806153 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a4561d4-7536-4a44-be2c-afac80d7a063\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-726gn" podUID="3a4561d4-7536-4a44-be2c-afac80d7a063" Sep 12 17:40:01.828063 containerd[1439]: time="2025-09-12T17:40:01.827973650Z" level=error msg="StopPodSandbox for \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\" failed" error="failed to destroy network for sandbox \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.829558 kubelet[2491]: E0912 17:40:01.828438 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:01.829558 kubelet[2491]: E0912 17:40:01.828505 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd"} Sep 12 17:40:01.829558 kubelet[2491]: E0912 17:40:01.828577 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e93698b-a189-45ca-894b-4585a51c5842\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:01.829558 kubelet[2491]: E0912 17:40:01.828603 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e93698b-a189-45ca-894b-4585a51c5842\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7pms6" podUID="0e93698b-a189-45ca-894b-4585a51c5842" Sep 12 17:40:01.833531 containerd[1439]: time="2025-09-12T17:40:01.833494117Z" level=error msg="StopPodSandbox for \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\" failed" error="failed to destroy network for sandbox \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.833707 kubelet[2491]: E0912 17:40:01.833677 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Sep 12 17:40:01.833753 kubelet[2491]: E0912 17:40:01.833719 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef"} Sep 12 17:40:01.833786 kubelet[2491]: E0912 17:40:01.833761 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1546fa5b-41ca-4825-9d46-eec66efc2e80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:01.833831 kubelet[2491]: E0912 17:40:01.833790 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1546fa5b-41ca-4825-9d46-eec66efc2e80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-855787cfcf-fvkd7" podUID="1546fa5b-41ca-4825-9d46-eec66efc2e80" Sep 12 17:40:01.843746 containerd[1439]: time="2025-09-12T17:40:01.843228174Z" level=error msg="StopPodSandbox for \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\" failed" error="failed to destroy network for sandbox \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.844061 kubelet[2491]: E0912 17:40:01.843942 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:01.844061 kubelet[2491]: E0912 17:40:01.843990 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa"} Sep 12 17:40:01.844061 kubelet[2491]: E0912 17:40:01.844016 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"249c9c72-dc41-4c2e-9f20-c59454807552\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:01.844061 kubelet[2491]: 
E0912 17:40:01.844036 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"249c9c72-dc41-4c2e-9f20-c59454807552\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc44ff76c-zs9w2" podUID="249c9c72-dc41-4c2e-9f20-c59454807552" Sep 12 17:40:01.854380 containerd[1439]: time="2025-09-12T17:40:01.854339067Z" level=error msg="StopPodSandbox for \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\" failed" error="failed to destroy network for sandbox \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.854738 kubelet[2491]: E0912 17:40:01.854705 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:01.854803 kubelet[2491]: E0912 17:40:01.854750 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f"} Sep 12 17:40:01.854803 kubelet[2491]: E0912 17:40:01.854777 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a8ad9ed-e732-4605-8bb5-90e319ea4f13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:01.854879 kubelet[2491]: E0912 17:40:01.854796 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a8ad9ed-e732-4605-8bb5-90e319ea4f13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hdkml" podUID="5a8ad9ed-e732-4605-8bb5-90e319ea4f13" Sep 12 17:40:01.855347 containerd[1439]: time="2025-09-12T17:40:01.855295785Z" level=error msg="StopPodSandbox for \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\" failed" error="failed to destroy network for sandbox \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 
12 17:40:01.855514 kubelet[2491]: E0912 17:40:01.855468 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Sep 12 17:40:01.856572 containerd[1439]: time="2025-09-12T17:40:01.855601384Z" level=error msg="StopPodSandbox for \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\" failed" error="failed to destroy network for sandbox \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.856633 kubelet[2491]: E0912 17:40:01.856044 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:01.856633 kubelet[2491]: E0912 17:40:01.856100 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52"} Sep 12 17:40:01.856633 kubelet[2491]: E0912 17:40:01.856140 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"911054c3-b540-44ec-b7d0-e107c4c14fea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:01.856633 kubelet[2491]: E0912 17:40:01.856158 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"911054c3-b540-44ec-b7d0-e107c4c14fea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-86877d4794-4zh8m" podUID="911054c3-b540-44ec-b7d0-e107c4c14fea" Sep 12 17:40:01.856848 kubelet[2491]: E0912 17:40:01.856815 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12"} Sep 12 17:40:01.856884 kubelet[2491]: E0912 17:40:01.856856 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78c55e80-8b1c-48d6-a5e7-2c7abb2426e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:01.856884 kubelet[2491]: E0912 17:40:01.856876 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78c55e80-8b1c-48d6-a5e7-2c7abb2426e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qgttl" podUID="78c55e80-8b1c-48d6-a5e7-2c7abb2426e4" Sep 12 17:40:01.860501 containerd[1439]: time="2025-09-12T17:40:01.860454173Z" level=error msg="StopPodSandbox for \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\" failed" error="failed to destroy network for sandbox \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:01.860899 kubelet[2491]: E0912 17:40:01.860867 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:01.860944 kubelet[2491]: E0912 17:40:01.860905 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156"} Sep 12 17:40:01.860944 kubelet[2491]: E0912 17:40:01.860929 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dc49e149-a094-4d79-a8c7-27e8dff370b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:01.861017 kubelet[2491]: E0912 17:40:01.860947 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dc49e149-a094-4d79-a8c7-27e8dff370b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc44ff76c-46zh9" podUID="dc49e149-a094-4d79-a8c7-27e8dff370b3" Sep 12 17:40:01.865711 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156-shm.mount: Deactivated successfully. 
Sep 12 17:40:01.865795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52-shm.mount: Deactivated successfully. Sep 12 17:40:04.680741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount698192800.mount: Deactivated successfully. Sep 12 17:40:04.945268 containerd[1439]: time="2025-09-12T17:40:04.944737578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:04.945700 containerd[1439]: time="2025-09-12T17:40:04.945607256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 17:40:04.946706 containerd[1439]: time="2025-09-12T17:40:04.946674734Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:04.950728 containerd[1439]: time="2025-09-12T17:40:04.950694485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:04.951467 containerd[1439]: time="2025-09-12T17:40:04.951230444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.200095275s" Sep 12 17:40:04.951467 containerd[1439]: time="2025-09-12T17:40:04.951261764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 17:40:04.962646 containerd[1439]: time="2025-09-12T17:40:04.962555019Z" level=info msg="CreateContainer within sandbox \"7c0d8b73446b43b2d3d8c32aef154e33d93b0e0b7fcc7399319b43a7757f6408\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:40:04.975045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943096752.mount: Deactivated successfully. Sep 12 17:40:04.986340 containerd[1439]: time="2025-09-12T17:40:04.986295767Z" level=info msg="CreateContainer within sandbox \"7c0d8b73446b43b2d3d8c32aef154e33d93b0e0b7fcc7399319b43a7757f6408\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"82cef001682356c1f2b0e5e54bcbda6a49f89eb57f8b01fe88376fe38325d72c\"" Sep 12 17:40:04.987569 containerd[1439]: time="2025-09-12T17:40:04.986800366Z" level=info msg="StartContainer for \"82cef001682356c1f2b0e5e54bcbda6a49f89eb57f8b01fe88376fe38325d72c\"" Sep 12 17:40:05.048723 systemd[1]: Started cri-containerd-82cef001682356c1f2b0e5e54bcbda6a49f89eb57f8b01fe88376fe38325d72c.scope - libcontainer container 82cef001682356c1f2b0e5e54bcbda6a49f89eb57f8b01fe88376fe38325d72c. Sep 12 17:40:05.070988 containerd[1439]: time="2025-09-12T17:40:05.070892225Z" level=info msg="StartContainer for \"82cef001682356c1f2b0e5e54bcbda6a49f89eb57f8b01fe88376fe38325d72c\" returns successfully" Sep 12 17:40:05.184910 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:40:05.185203 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
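The pull that unblocks the node completes here: containerd reports 151100457 bytes read for ghcr.io/flatcar/calico/node:v3.30.3 in 4.200095275s. A quick back-of-the-envelope check of that rate; the two constants are copied from the entries above and only the unit conversion is added:

```go
// Throughput implied by the containerd pull entries above.
package main

import "fmt"

func main() {
	const bytesRead = 151100457 // "active requests=0, bytes read=151100457"
	const seconds = 4.200095275 // "in 4.200095275s"
	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("%.1f MiB in %.2f s = %.1f MiB/s\n", mib, seconds, mib/seconds)
}
```

That works out to roughly 144 MiB at about 34 MiB/s, after which the calico-node container starts and the WireGuard module loads.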
Sep 12 17:40:05.272628 containerd[1439]: time="2025-09-12T17:40:05.272591594Z" level=info msg="StopPodSandbox for \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\"" Sep 12 17:40:05.428652 containerd[1439]: 2025-09-12 17:40:05.333 [INFO][3785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:05.428652 containerd[1439]: 2025-09-12 17:40:05.334 [INFO][3785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" iface="eth0" netns="/var/run/netns/cni-a0853541-657f-67c9-ea8b-6f48b7dc0c3c" Sep 12 17:40:05.428652 containerd[1439]: 2025-09-12 17:40:05.335 [INFO][3785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" iface="eth0" netns="/var/run/netns/cni-a0853541-657f-67c9-ea8b-6f48b7dc0c3c" Sep 12 17:40:05.428652 containerd[1439]: 2025-09-12 17:40:05.336 [INFO][3785] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" iface="eth0" netns="/var/run/netns/cni-a0853541-657f-67c9-ea8b-6f48b7dc0c3c" Sep 12 17:40:05.428652 containerd[1439]: 2025-09-12 17:40:05.336 [INFO][3785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:05.428652 containerd[1439]: 2025-09-12 17:40:05.336 [INFO][3785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:05.428652 containerd[1439]: 2025-09-12 17:40:05.412 [INFO][3796] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" HandleID="k8s-pod-network.c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Workload="localhost-k8s-whisker--86877d4794--4zh8m-eth0" Sep 12 17:40:05.428652 containerd[1439]: 2025-09-12 17:40:05.412 [INFO][3796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:05.428652 containerd[1439]: 2025-09-12 17:40:05.412 [INFO][3796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:05.428652 containerd[1439]: 2025-09-12 17:40:05.422 [WARNING][3796] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" HandleID="k8s-pod-network.c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Workload="localhost-k8s-whisker--86877d4794--4zh8m-eth0" Sep 12 17:40:05.428652 containerd[1439]: 2025-09-12 17:40:05.422 [INFO][3796] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" HandleID="k8s-pod-network.c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Workload="localhost-k8s-whisker--86877d4794--4zh8m-eth0" Sep 12 17:40:05.428652 containerd[1439]: 2025-09-12 17:40:05.424 [INFO][3796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:05.428652 containerd[1439]: 2025-09-12 17:40:05.426 [INFO][3785] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:05.428652 containerd[1439]: time="2025-09-12T17:40:05.428598060Z" level=info msg="TearDown network for sandbox \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\" successfully" Sep 12 17:40:05.428652 containerd[1439]: time="2025-09-12T17:40:05.428624060Z" level=info msg="StopPodSandbox for \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\" returns successfully" Sep 12 17:40:05.499844 kubelet[2491]: I0912 17:40:05.499809 2491 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/911054c3-b540-44ec-b7d0-e107c4c14fea-whisker-backend-key-pair\") pod \"911054c3-b540-44ec-b7d0-e107c4c14fea\" (UID: \"911054c3-b540-44ec-b7d0-e107c4c14fea\") " Sep 12 17:40:05.500302 kubelet[2491]: I0912 17:40:05.499854 2491 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/911054c3-b540-44ec-b7d0-e107c4c14fea-whisker-ca-bundle\") pod \"911054c3-b540-44ec-b7d0-e107c4c14fea\" (UID: \"911054c3-b540-44ec-b7d0-e107c4c14fea\") " Sep 12 17:40:05.500302 kubelet[2491]: I0912 17:40:05.499873 2491 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfcqd\" (UniqueName: \"kubernetes.io/projected/911054c3-b540-44ec-b7d0-e107c4c14fea-kube-api-access-xfcqd\") pod \"911054c3-b540-44ec-b7d0-e107c4c14fea\" (UID: \"911054c3-b540-44ec-b7d0-e107c4c14fea\") " Sep 12 17:40:05.517387 kubelet[2491]: I0912 17:40:05.517084 2491 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/911054c3-b540-44ec-b7d0-e107c4c14fea-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "911054c3-b540-44ec-b7d0-e107c4c14fea" (UID: "911054c3-b540-44ec-b7d0-e107c4c14fea"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:40:05.517387 kubelet[2491]: I0912 17:40:05.517091 2491 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911054c3-b540-44ec-b7d0-e107c4c14fea-kube-api-access-xfcqd" (OuterVolumeSpecName: "kube-api-access-xfcqd") pod "911054c3-b540-44ec-b7d0-e107c4c14fea" (UID: "911054c3-b540-44ec-b7d0-e107c4c14fea"). InnerVolumeSpecName "kube-api-access-xfcqd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:40:05.517387 kubelet[2491]: I0912 17:40:05.517338 2491 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911054c3-b540-44ec-b7d0-e107c4c14fea-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "911054c3-b540-44ec-b7d0-e107c4c14fea" (UID: "911054c3-b540-44ec-b7d0-e107c4c14fea"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:40:05.600890 kubelet[2491]: I0912 17:40:05.600780 2491 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/911054c3-b540-44ec-b7d0-e107c4c14fea-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 17:40:05.600890 kubelet[2491]: I0912 17:40:05.600815 2491 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/911054c3-b540-44ec-b7d0-e107c4c14fea-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 17:40:05.600890 kubelet[2491]: I0912 17:40:05.600826 2491 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfcqd\" (UniqueName: \"kubernetes.io/projected/911054c3-b540-44ec-b7d0-e107c4c14fea-kube-api-access-xfcqd\") on node \"localhost\" DevicePath \"\"" Sep 12 17:40:05.681569 systemd[1]: run-netns-cni\x2da0853541\x2d657f\x2d67c9\x2dea8b\x2d6f48b7dc0c3c.mount: Deactivated successfully. Sep 12 17:40:05.681657 systemd[1]: var-lib-kubelet-pods-911054c3\x2db540\x2d44ec\x2db7d0\x2de107c4c14fea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxfcqd.mount: Deactivated successfully. Sep 12 17:40:05.681722 systemd[1]: var-lib-kubelet-pods-911054c3\x2db540\x2d44ec\x2db7d0\x2de107c4c14fea-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:40:05.683269 systemd[1]: Removed slice kubepods-besteffort-pod911054c3_b540_44ec_b7d0_e107c4c14fea.slice - libcontainer container kubepods-besteffort-pod911054c3_b540_44ec_b7d0_e107c4c14fea.slice. Sep 12 17:40:05.821109 kubelet[2491]: I0912 17:40:05.821054 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xfg5x" podStartSLOduration=2.258633524 podStartE2EDuration="13.821031541s" podCreationTimestamp="2025-09-12 17:39:52 +0000 UTC" firstStartedPulling="2025-09-12 17:39:53.389931785 +0000 UTC m=+23.811277954" lastFinishedPulling="2025-09-12 17:40:04.952329802 +0000 UTC m=+35.373675971" observedRunningTime="2025-09-12 17:40:05.820833701 +0000 UTC m=+36.242179870" watchObservedRunningTime="2025-09-12 17:40:05.821031541 +0000 UTC m=+36.242377670" Sep 12 17:40:05.889980 systemd[1]: Created slice kubepods-besteffort-pod6ac22498_2ba9_4a80_ad1d_c9506b860224.slice - libcontainer container kubepods-besteffort-pod6ac22498_2ba9_4a80_ad1d_c9506b860224.slice. 
Sep 12 17:40:05.903669 kubelet[2491]: I0912 17:40:05.903322 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ac22498-2ba9-4a80-ad1d-c9506b860224-whisker-ca-bundle\") pod \"whisker-559d486dcc-s9bnl\" (UID: \"6ac22498-2ba9-4a80-ad1d-c9506b860224\") " pod="calico-system/whisker-559d486dcc-s9bnl" Sep 12 17:40:05.903669 kubelet[2491]: I0912 17:40:05.903365 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg5wj\" (UniqueName: \"kubernetes.io/projected/6ac22498-2ba9-4a80-ad1d-c9506b860224-kube-api-access-qg5wj\") pod \"whisker-559d486dcc-s9bnl\" (UID: \"6ac22498-2ba9-4a80-ad1d-c9506b860224\") " pod="calico-system/whisker-559d486dcc-s9bnl" Sep 12 17:40:05.903669 kubelet[2491]: I0912 17:40:05.903383 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6ac22498-2ba9-4a80-ad1d-c9506b860224-whisker-backend-key-pair\") pod \"whisker-559d486dcc-s9bnl\" (UID: \"6ac22498-2ba9-4a80-ad1d-c9506b860224\") " pod="calico-system/whisker-559d486dcc-s9bnl" Sep 12 17:40:06.193985 containerd[1439]: time="2025-09-12T17:40:06.193879674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-559d486dcc-s9bnl,Uid:6ac22498-2ba9-4a80-ad1d-c9506b860224,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:06.314801 systemd-networkd[1382]: cali0226dce7c10: Link UP Sep 12 17:40:06.314970 systemd-networkd[1382]: cali0226dce7c10: Gained carrier Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.222 [INFO][3818] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.239 [INFO][3818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--559d486dcc--s9bnl-eth0 whisker-559d486dcc- calico-system 6ac22498-2ba9-4a80-ad1d-c9506b860224 912 0 2025-09-12 17:40:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:559d486dcc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-559d486dcc-s9bnl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0226dce7c10 [] [] }} ContainerID="d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" Namespace="calico-system" Pod="whisker-559d486dcc-s9bnl" WorkloadEndpoint="localhost-k8s-whisker--559d486dcc--s9bnl-" Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.239 [INFO][3818] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" Namespace="calico-system" Pod="whisker-559d486dcc-s9bnl" WorkloadEndpoint="localhost-k8s-whisker--559d486dcc--s9bnl-eth0" Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.275 [INFO][3833] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" HandleID="k8s-pod-network.d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" Workload="localhost-k8s-whisker--559d486dcc--s9bnl-eth0" Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.275 [INFO][3833] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" HandleID="k8s-pod-network.d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" Workload="localhost-k8s-whisker--559d486dcc--s9bnl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d5100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-559d486dcc-s9bnl", "timestamp":"2025-09-12 17:40:06.275459584 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.275 [INFO][3833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.275 [INFO][3833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.275 [INFO][3833] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.286 [INFO][3833] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" host="localhost" Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.290 [INFO][3833] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.293 [INFO][3833] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.295 [INFO][3833] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.297 [INFO][3833] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.297 [INFO][3833] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" host="localhost" Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.298 [INFO][3833] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6 Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.302 [INFO][3833] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" host="localhost" Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.307 [INFO][3833] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" host="localhost" Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.307 [INFO][3833] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" host="localhost" Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.307 [INFO][3833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:06.326313 containerd[1439]: 2025-09-12 17:40:06.307 [INFO][3833] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" HandleID="k8s-pod-network.d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" Workload="localhost-k8s-whisker--559d486dcc--s9bnl-eth0" Sep 12 17:40:06.327021 containerd[1439]: 2025-09-12 17:40:06.309 [INFO][3818] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" Namespace="calico-system" Pod="whisker-559d486dcc-s9bnl" WorkloadEndpoint="localhost-k8s-whisker--559d486dcc--s9bnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--559d486dcc--s9bnl-eth0", GenerateName:"whisker-559d486dcc-", Namespace:"calico-system", SelfLink:"", UID:"6ac22498-2ba9-4a80-ad1d-c9506b860224", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"559d486dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-559d486dcc-s9bnl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0226dce7c10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:06.327021 containerd[1439]: 2025-09-12 17:40:06.309 [INFO][3818] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" Namespace="calico-system" Pod="whisker-559d486dcc-s9bnl" WorkloadEndpoint="localhost-k8s-whisker--559d486dcc--s9bnl-eth0" Sep 12 17:40:06.327021 containerd[1439]: 2025-09-12 17:40:06.309 [INFO][3818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0226dce7c10 ContainerID="d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" Namespace="calico-system" Pod="whisker-559d486dcc-s9bnl" WorkloadEndpoint="localhost-k8s-whisker--559d486dcc--s9bnl-eth0" Sep 12 17:40:06.327021 containerd[1439]: 2025-09-12 17:40:06.315 [INFO][3818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" Namespace="calico-system" Pod="whisker-559d486dcc-s9bnl" WorkloadEndpoint="localhost-k8s-whisker--559d486dcc--s9bnl-eth0" Sep 12 17:40:06.327021 containerd[1439]: 2025-09-12 17:40:06.315 [INFO][3818] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" Namespace="calico-system" Pod="whisker-559d486dcc-s9bnl" WorkloadEndpoint="localhost-k8s-whisker--559d486dcc--s9bnl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--559d486dcc--s9bnl-eth0", GenerateName:"whisker-559d486dcc-", Namespace:"calico-system", SelfLink:"", UID:"6ac22498-2ba9-4a80-ad1d-c9506b860224", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"559d486dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6", Pod:"whisker-559d486dcc-s9bnl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0226dce7c10", MAC:"ea:e2:6c:5b:89:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:06.327021 containerd[1439]: 2025-09-12 17:40:06.324 [INFO][3818] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6" Namespace="calico-system" Pod="whisker-559d486dcc-s9bnl" WorkloadEndpoint="localhost-k8s-whisker--559d486dcc--s9bnl-eth0" Sep 12 17:40:06.340419 containerd[1439]: time="2025-09-12T17:40:06.340338569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:06.340419 containerd[1439]: time="2025-09-12T17:40:06.340387249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:06.340419 containerd[1439]: time="2025-09-12T17:40:06.340398329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:06.340761 containerd[1439]: time="2025-09-12T17:40:06.340466528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:06.354723 systemd[1]: Started cri-containerd-d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6.scope - libcontainer container d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6. 
Sep 12 17:40:06.364206 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:40:06.381033 containerd[1439]: time="2025-09-12T17:40:06.380941804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-559d486dcc-s9bnl,Uid:6ac22498-2ba9-4a80-ad1d-c9506b860224,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6\"" Sep 12 17:40:06.382605 containerd[1439]: time="2025-09-12T17:40:06.382433081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:40:06.811278 kubelet[2491]: I0912 17:40:06.811088 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:07.550457 containerd[1439]: time="2025-09-12T17:40:07.550411155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:07.551178 containerd[1439]: time="2025-09-12T17:40:07.550888274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 17:40:07.556004 containerd[1439]: time="2025-09-12T17:40:07.555952023Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:07.556809 containerd[1439]: time="2025-09-12T17:40:07.556780342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.174314101s" Sep 12 17:40:07.556986 containerd[1439]: time="2025-09-12T17:40:07.556895501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 17:40:07.557571 containerd[1439]: time="2025-09-12T17:40:07.557529900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:07.560940 containerd[1439]: time="2025-09-12T17:40:07.560902933Z" level=info msg="CreateContainer within sandbox \"d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:40:07.574457 containerd[1439]: time="2025-09-12T17:40:07.574417986Z" level=info msg="CreateContainer within sandbox \"d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"19fa5356fbce2094a67c43cf81113615f8a0c088cd7cc46b15511240e23655ea\"" Sep 12 17:40:07.576039 containerd[1439]: time="2025-09-12T17:40:07.574975905Z" level=info msg="StartContainer for \"19fa5356fbce2094a67c43cf81113615f8a0c088cd7cc46b15511240e23655ea\"" Sep 12 17:40:07.604179 systemd[1]: Started cri-containerd-19fa5356fbce2094a67c43cf81113615f8a0c088cd7cc46b15511240e23655ea.scope - libcontainer container 19fa5356fbce2094a67c43cf81113615f8a0c088cd7cc46b15511240e23655ea. 
Sep 12 17:40:07.631880 containerd[1439]: time="2025-09-12T17:40:07.631833229Z" level=info msg="StartContainer for \"19fa5356fbce2094a67c43cf81113615f8a0c088cd7cc46b15511240e23655ea\" returns successfully" Sep 12 17:40:07.632941 containerd[1439]: time="2025-09-12T17:40:07.632916107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:40:07.678079 kubelet[2491]: I0912 17:40:07.678016 2491 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="911054c3-b540-44ec-b7d0-e107c4c14fea" path="/var/lib/kubelet/pods/911054c3-b540-44ec-b7d0-e107c4c14fea/volumes" Sep 12 17:40:07.782673 systemd-networkd[1382]: cali0226dce7c10: Gained IPv6LL Sep 12 17:40:09.742067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2440102943.mount: Deactivated successfully. Sep 12 17:40:09.756902 containerd[1439]: time="2025-09-12T17:40:09.756858227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:09.757808 containerd[1439]: time="2025-09-12T17:40:09.757642465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 17:40:09.758562 containerd[1439]: time="2025-09-12T17:40:09.758502943Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:09.866038 containerd[1439]: time="2025-09-12T17:40:09.865962535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:09.866843 containerd[1439]: time="2025-09-12T17:40:09.866801253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.233851346s" Sep 12 17:40:09.866843 containerd[1439]: time="2025-09-12T17:40:09.866839773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 17:40:09.872529 containerd[1439]: time="2025-09-12T17:40:09.872491522Z" level=info msg="CreateContainer within sandbox \"d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:40:09.900359 containerd[1439]: time="2025-09-12T17:40:09.900310348Z" level=info msg="CreateContainer within sandbox \"d3629fa03600bdd861daac954c5d02f07e70cd2dca6a4466b93d29baa9c9bbb6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"94b5b393a70bc17d8eca1d866d3c6cae189473b24c553c4f9c61c7b831f6830f\"" Sep 12 17:40:09.901245 containerd[1439]: time="2025-09-12T17:40:09.901209706Z" level=info msg="StartContainer for \"94b5b393a70bc17d8eca1d866d3c6cae189473b24c553c4f9c61c7b831f6830f\"" Sep 12 17:40:09.940675 systemd[1]: Started cri-containerd-94b5b393a70bc17d8eca1d866d3c6cae189473b24c553c4f9c61c7b831f6830f.scope - libcontainer container 94b5b393a70bc17d8eca1d866d3c6cae189473b24c553c4f9c61c7b831f6830f. 
Sep 12 17:40:09.967054 containerd[1439]: time="2025-09-12T17:40:09.967009339Z" level=info msg="StartContainer for \"94b5b393a70bc17d8eca1d866d3c6cae189473b24c553c4f9c61c7b831f6830f\" returns successfully" Sep 12 17:40:10.830973 kubelet[2491]: I0912 17:40:10.830915 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-559d486dcc-s9bnl" podStartSLOduration=2.345259407 podStartE2EDuration="5.830898657s" podCreationTimestamp="2025-09-12 17:40:05 +0000 UTC" firstStartedPulling="2025-09-12 17:40:06.382250561 +0000 UTC m=+36.803596690" lastFinishedPulling="2025-09-12 17:40:09.867889771 +0000 UTC m=+40.289235940" observedRunningTime="2025-09-12 17:40:10.830532218 +0000 UTC m=+41.251878347" watchObservedRunningTime="2025-09-12 17:40:10.830898657 +0000 UTC m=+41.252244826" Sep 12 17:40:12.674688 containerd[1439]: time="2025-09-12T17:40:12.674647049Z" level=info msg="StopPodSandbox for \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\"" Sep 12 17:40:12.675046 containerd[1439]: time="2025-09-12T17:40:12.674976608Z" level=info msg="StopPodSandbox for \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\"" Sep 12 17:40:12.764870 containerd[1439]: 2025-09-12 17:40:12.721 [INFO][4232] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:12.764870 containerd[1439]: 2025-09-12 17:40:12.721 [INFO][4232] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" iface="eth0" netns="/var/run/netns/cni-bd67ff3f-b769-c052-bd00-2ff16dc65b28" Sep 12 17:40:12.764870 containerd[1439]: 2025-09-12 17:40:12.722 [INFO][4232] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" iface="eth0" netns="/var/run/netns/cni-bd67ff3f-b769-c052-bd00-2ff16dc65b28" Sep 12 17:40:12.764870 containerd[1439]: 2025-09-12 17:40:12.722 [INFO][4232] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" iface="eth0" netns="/var/run/netns/cni-bd67ff3f-b769-c052-bd00-2ff16dc65b28" Sep 12 17:40:12.764870 containerd[1439]: 2025-09-12 17:40:12.722 [INFO][4232] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:12.764870 containerd[1439]: 2025-09-12 17:40:12.722 [INFO][4232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:12.764870 containerd[1439]: 2025-09-12 17:40:12.744 [INFO][4247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" HandleID="k8s-pod-network.2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Workload="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:12.764870 containerd[1439]: 2025-09-12 17:40:12.744 [INFO][4247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:12.764870 containerd[1439]: 2025-09-12 17:40:12.744 [INFO][4247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:12.764870 containerd[1439]: 2025-09-12 17:40:12.755 [WARNING][4247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" HandleID="k8s-pod-network.2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Workload="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:12.764870 containerd[1439]: 2025-09-12 17:40:12.755 [INFO][4247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" HandleID="k8s-pod-network.2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Workload="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:12.764870 containerd[1439]: 2025-09-12 17:40:12.758 [INFO][4247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:12.764870 containerd[1439]: 2025-09-12 17:40:12.761 [INFO][4232] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:12.766356 containerd[1439]: time="2025-09-12T17:40:12.765901402Z" level=info msg="TearDown network for sandbox \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\" successfully" Sep 12 17:40:12.766356 containerd[1439]: time="2025-09-12T17:40:12.765946322Z" level=info msg="StopPodSandbox for \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\" returns successfully" Sep 12 17:40:12.768275 containerd[1439]: time="2025-09-12T17:40:12.767870319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc44ff76c-zs9w2,Uid:249c9c72-dc41-4c2e-9f20-c59454807552,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:40:12.768408 systemd[1]: run-netns-cni\x2dbd67ff3f\x2db769\x2dc052\x2dbd00\x2d2ff16dc65b28.mount: Deactivated successfully. Sep 12 17:40:12.772851 containerd[1439]: 2025-09-12 17:40:12.724 [INFO][4233] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:12.772851 containerd[1439]: 2025-09-12 17:40:12.724 [INFO][4233] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" iface="eth0" netns="/var/run/netns/cni-750148d7-582c-e054-a4ec-5df720941f3f" Sep 12 17:40:12.772851 containerd[1439]: 2025-09-12 17:40:12.725 [INFO][4233] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" iface="eth0" netns="/var/run/netns/cni-750148d7-582c-e054-a4ec-5df720941f3f" Sep 12 17:40:12.772851 containerd[1439]: 2025-09-12 17:40:12.725 [INFO][4233] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" iface="eth0" netns="/var/run/netns/cni-750148d7-582c-e054-a4ec-5df720941f3f" Sep 12 17:40:12.772851 containerd[1439]: 2025-09-12 17:40:12.725 [INFO][4233] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:12.772851 containerd[1439]: 2025-09-12 17:40:12.725 [INFO][4233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:12.772851 containerd[1439]: 2025-09-12 17:40:12.745 [INFO][4255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" HandleID="k8s-pod-network.79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Workload="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:12.772851 containerd[1439]: 2025-09-12 17:40:12.746 [INFO][4255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:12.772851 containerd[1439]: 2025-09-12 17:40:12.758 [INFO][4255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:12.772851 containerd[1439]: 2025-09-12 17:40:12.767 [WARNING][4255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" HandleID="k8s-pod-network.79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Workload="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:12.772851 containerd[1439]: 2025-09-12 17:40:12.767 [INFO][4255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" HandleID="k8s-pod-network.79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Workload="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:12.772851 containerd[1439]: 2025-09-12 17:40:12.769 [INFO][4255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:12.772851 containerd[1439]: 2025-09-12 17:40:12.771 [INFO][4233] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:12.773519 containerd[1439]: time="2025-09-12T17:40:12.772909350Z" level=info msg="TearDown network for sandbox \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\" successfully" Sep 12 17:40:12.773519 containerd[1439]: time="2025-09-12T17:40:12.772932150Z" level=info msg="StopPodSandbox for \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\" returns successfully" Sep 12 17:40:12.773519 containerd[1439]: time="2025-09-12T17:40:12.773422189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc44ff76c-46zh9,Uid:dc49e149-a094-4d79-a8c7-27e8dff370b3,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:40:12.775041 systemd[1]: run-netns-cni\x2d750148d7\x2d582c\x2de054\x2da4ec\x2d5df720941f3f.mount: Deactivated successfully. 
Sep 12 17:40:12.888492 systemd-networkd[1382]: cali4a5e9517ad5: Link UP Sep 12 17:40:12.888983 systemd-networkd[1382]: cali4a5e9517ad5: Gained carrier Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.806 [INFO][4277] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.821 [INFO][4277] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0 calico-apiserver-bc44ff76c- calico-apiserver 249c9c72-dc41-4c2e-9f20-c59454807552 948 0 2025-09-12 17:39:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bc44ff76c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bc44ff76c-zs9w2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4a5e9517ad5 [] [] }} ContainerID="b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-zs9w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-" Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.821 [INFO][4277] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-zs9w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.843 [INFO][4294] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" HandleID="k8s-pod-network.b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" Workload="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.843 [INFO][4294] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" HandleID="k8s-pod-network.b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" Workload="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000129f30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bc44ff76c-zs9w2", "timestamp":"2025-09-12 17:40:12.843689781 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.843 [INFO][4294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.843 [INFO][4294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.843 [INFO][4294] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.852 [INFO][4294] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" host="localhost" Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.864 [INFO][4294] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.867 [INFO][4294] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.869 [INFO][4294] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.871 [INFO][4294] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.871 [INFO][4294] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" host="localhost" Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.872 [INFO][4294] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4 Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.875 [INFO][4294] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" host="localhost" Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.880 [INFO][4294] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" host="localhost" Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.880 [INFO][4294] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" host="localhost" Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.880 [INFO][4294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:12.902030 containerd[1439]: 2025-09-12 17:40:12.880 [INFO][4294] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" HandleID="k8s-pod-network.b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" Workload="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:12.902577 containerd[1439]: 2025-09-12 17:40:12.883 [INFO][4277] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-zs9w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0", GenerateName:"calico-apiserver-bc44ff76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"249c9c72-dc41-4c2e-9f20-c59454807552", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc44ff76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bc44ff76c-zs9w2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a5e9517ad5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:12.902577 containerd[1439]: 2025-09-12 17:40:12.883 [INFO][4277] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-zs9w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:12.902577 containerd[1439]: 2025-09-12 17:40:12.883 [INFO][4277] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a5e9517ad5 ContainerID="b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-zs9w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:12.902577 containerd[1439]: 2025-09-12 17:40:12.889 [INFO][4277] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-zs9w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:12.902577 containerd[1439]: 2025-09-12 17:40:12.890 [INFO][4277] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-zs9w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0", GenerateName:"calico-apiserver-bc44ff76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"249c9c72-dc41-4c2e-9f20-c59454807552", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc44ff76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4", Pod:"calico-apiserver-bc44ff76c-zs9w2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a5e9517ad5", MAC:"0e:2c:a8:da:79:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:12.902577 containerd[1439]: 2025-09-12 17:40:12.899 [INFO][4277] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-zs9w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:12.915393 containerd[1439]: time="2025-09-12T17:40:12.915175770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:12.915393 containerd[1439]: time="2025-09-12T17:40:12.915228730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:12.915393 containerd[1439]: time="2025-09-12T17:40:12.915244170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:12.915393 containerd[1439]: time="2025-09-12T17:40:12.915312930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:12.940737 systemd[1]: Started cri-containerd-b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4.scope - libcontainer container b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4. 
Sep 12 17:40:12.959088 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:40:12.984947 containerd[1439]: time="2025-09-12T17:40:12.984908723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc44ff76c-zs9w2,Uid:249c9c72-dc41-4c2e-9f20-c59454807552,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4\"" Sep 12 17:40:12.988677 containerd[1439]: time="2025-09-12T17:40:12.988600797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:40:12.997178 systemd-networkd[1382]: calic40effe1a98: Link UP Sep 12 17:40:12.997375 systemd-networkd[1382]: calic40effe1a98: Gained carrier Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.803 [INFO][4267] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.821 [INFO][4267] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0 calico-apiserver-bc44ff76c- calico-apiserver dc49e149-a094-4d79-a8c7-27e8dff370b3 949 0 2025-09-12 17:39:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bc44ff76c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bc44ff76c-46zh9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic40effe1a98 [] [] }} ContainerID="86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-46zh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-" Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.821 [INFO][4267] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-46zh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.847 [INFO][4300] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" HandleID="k8s-pod-network.86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" Workload="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.847 [INFO][4300] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" HandleID="k8s-pod-network.86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" Workload="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bc44ff76c-46zh9", "timestamp":"2025-09-12 17:40:12.847029895 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 
17:40:12.847 [INFO][4300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.880 [INFO][4300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.880 [INFO][4300] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.954 [INFO][4300] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" host="localhost" Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.965 [INFO][4300] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.972 [INFO][4300] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.975 [INFO][4300] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.978 [INFO][4300] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.978 [INFO][4300] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" host="localhost" Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.980 [INFO][4300] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0 Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.986 [INFO][4300] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" host="localhost" Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.992 [INFO][4300] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" host="localhost" Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.992 [INFO][4300] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" host="localhost" Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.992 [INFO][4300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:13.008986 containerd[1439]: 2025-09-12 17:40:12.992 [INFO][4300] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" HandleID="k8s-pod-network.86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" Workload="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:13.009529 containerd[1439]: 2025-09-12 17:40:12.994 [INFO][4267] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-46zh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0", GenerateName:"calico-apiserver-bc44ff76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc49e149-a094-4d79-a8c7-27e8dff370b3", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc44ff76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bc44ff76c-46zh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic40effe1a98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:13.009529 containerd[1439]: 2025-09-12 17:40:12.994 [INFO][4267] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-46zh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:13.009529 containerd[1439]: 2025-09-12 17:40:12.994 [INFO][4267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic40effe1a98 ContainerID="86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-46zh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:13.009529 containerd[1439]: 2025-09-12 17:40:12.996 [INFO][4267] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-46zh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:13.009529 containerd[1439]: 2025-09-12 17:40:12.996 [INFO][4267] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-46zh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0", GenerateName:"calico-apiserver-bc44ff76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc49e149-a094-4d79-a8c7-27e8dff370b3", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc44ff76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0", Pod:"calico-apiserver-bc44ff76c-46zh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic40effe1a98", MAC:"52:8f:c5:9e:50:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:13.009529 containerd[1439]: 2025-09-12 17:40:13.006 [INFO][4267] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0" Namespace="calico-apiserver" Pod="calico-apiserver-bc44ff76c-46zh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:13.028300 containerd[1439]: time="2025-09-12T17:40:13.028195165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:13.028569 containerd[1439]: time="2025-09-12T17:40:13.028325165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:13.028569 containerd[1439]: time="2025-09-12T17:40:13.028357245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:13.029794 containerd[1439]: time="2025-09-12T17:40:13.028597125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:13.058126 systemd[1]: Started cri-containerd-86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0.scope - libcontainer container 86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0. 
Sep 12 17:40:13.076473 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:40:13.098627 containerd[1439]: time="2025-09-12T17:40:13.098590800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc44ff76c-46zh9,Uid:dc49e149-a094-4d79-a8c7-27e8dff370b3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0\"" Sep 12 17:40:13.675032 containerd[1439]: time="2025-09-12T17:40:13.674757570Z" level=info msg="StopPodSandbox for \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\"" Sep 12 17:40:13.675032 containerd[1439]: time="2025-09-12T17:40:13.674758210Z" level=info msg="StopPodSandbox for \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\"" Sep 12 17:40:13.756440 containerd[1439]: 2025-09-12 17:40:13.722 [INFO][4453] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:13.756440 containerd[1439]: 2025-09-12 17:40:13.722 [INFO][4453] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" iface="eth0" netns="/var/run/netns/cni-3ce799a1-2ddb-d966-b0fb-381c40073cba" Sep 12 17:40:13.756440 containerd[1439]: 2025-09-12 17:40:13.722 [INFO][4453] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" iface="eth0" netns="/var/run/netns/cni-3ce799a1-2ddb-d966-b0fb-381c40073cba" Sep 12 17:40:13.756440 containerd[1439]: 2025-09-12 17:40:13.722 [INFO][4453] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" iface="eth0" netns="/var/run/netns/cni-3ce799a1-2ddb-d966-b0fb-381c40073cba" Sep 12 17:40:13.756440 containerd[1439]: 2025-09-12 17:40:13.722 [INFO][4453] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:13.756440 containerd[1439]: 2025-09-12 17:40:13.722 [INFO][4453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:13.756440 containerd[1439]: 2025-09-12 17:40:13.740 [INFO][4472] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" HandleID="k8s-pod-network.dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Workload="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0" Sep 12 17:40:13.756440 containerd[1439]: 2025-09-12 17:40:13.741 [INFO][4472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:13.756440 containerd[1439]: 2025-09-12 17:40:13.741 [INFO][4472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:13.756440 containerd[1439]: 2025-09-12 17:40:13.749 [WARNING][4472] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" HandleID="k8s-pod-network.dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Workload="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0" Sep 12 17:40:13.756440 containerd[1439]: 2025-09-12 17:40:13.749 [INFO][4472] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" HandleID="k8s-pod-network.dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Workload="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0" Sep 12 17:40:13.756440 containerd[1439]: 2025-09-12 17:40:13.751 [INFO][4472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:13.756440 containerd[1439]: 2025-09-12 17:40:13.754 [INFO][4453] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:13.756812 containerd[1439]: time="2025-09-12T17:40:13.756572424Z" level=info msg="TearDown network for sandbox \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\" successfully" Sep 12 17:40:13.756812 containerd[1439]: time="2025-09-12T17:40:13.756599504Z" level=info msg="StopPodSandbox for \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\" returns successfully" Sep 12 17:40:13.756897 kubelet[2491]: E0912 17:40:13.756849 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:40:13.757742 containerd[1439]: time="2025-09-12T17:40:13.757716982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hdkml,Uid:5a8ad9ed-e732-4605-8bb5-90e319ea4f13,Namespace:kube-system,Attempt:1,}" Sep 12 17:40:13.766359 containerd[1439]: 2025-09-12 17:40:13.720 [INFO][4452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:13.766359 containerd[1439]: 2025-09-12 17:40:13.720 [INFO][4452] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" iface="eth0" netns="/var/run/netns/cni-78485be3-9580-6736-0827-15f5c6a9a513" Sep 12 17:40:13.766359 containerd[1439]: 2025-09-12 17:40:13.720 [INFO][4452] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" iface="eth0" netns="/var/run/netns/cni-78485be3-9580-6736-0827-15f5c6a9a513" Sep 12 17:40:13.766359 containerd[1439]: 2025-09-12 17:40:13.721 [INFO][4452] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" iface="eth0" netns="/var/run/netns/cni-78485be3-9580-6736-0827-15f5c6a9a513" Sep 12 17:40:13.766359 containerd[1439]: 2025-09-12 17:40:13.722 [INFO][4452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:13.766359 containerd[1439]: 2025-09-12 17:40:13.722 [INFO][4452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:13.766359 containerd[1439]: 2025-09-12 17:40:13.742 [INFO][4470] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" HandleID="k8s-pod-network.35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Workload="localhost-k8s-csi--node--driver--7pms6-eth0" Sep 12 17:40:13.766359 containerd[1439]: 2025-09-12 17:40:13.742 [INFO][4470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:13.766359 containerd[1439]: 2025-09-12 17:40:13.751 [INFO][4470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:13.766359 containerd[1439]: 2025-09-12 17:40:13.761 [WARNING][4470] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" HandleID="k8s-pod-network.35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Workload="localhost-k8s-csi--node--driver--7pms6-eth0" Sep 12 17:40:13.766359 containerd[1439]: 2025-09-12 17:40:13.761 [INFO][4470] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" HandleID="k8s-pod-network.35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Workload="localhost-k8s-csi--node--driver--7pms6-eth0" Sep 12 17:40:13.766359 containerd[1439]: 2025-09-12 17:40:13.762 [INFO][4470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:13.766359 containerd[1439]: 2025-09-12 17:40:13.764 [INFO][4452] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:13.766979 containerd[1439]: time="2025-09-12T17:40:13.766485966Z" level=info msg="TearDown network for sandbox \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\" successfully" Sep 12 17:40:13.766979 containerd[1439]: time="2025-09-12T17:40:13.766507286Z" level=info msg="StopPodSandbox for \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\" returns successfully" Sep 12 17:40:13.767452 containerd[1439]: time="2025-09-12T17:40:13.767406484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pms6,Uid:0e93698b-a189-45ca-894b-4585a51c5842,Namespace:calico-system,Attempt:1,}" Sep 12 17:40:13.768701 systemd[1]: run-netns-cni\x2d3ce799a1\x2d2ddb\x2dd966\x2db0fb\x2d381c40073cba.mount: Deactivated successfully. Sep 12 17:40:13.773100 systemd[1]: run-netns-cni\x2d78485be3\x2d9580\x2d6736\x2d0827\x2d15f5c6a9a513.mount: Deactivated successfully. 
Sep 12 17:40:13.881137 systemd-networkd[1382]: calia199acd65d2: Link UP
Sep 12 17:40:13.881321 systemd-networkd[1382]: calia199acd65d2: Gained carrier
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.797 [INFO][4486] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.812 [INFO][4486] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--hdkml-eth0 coredns-674b8bbfcf- kube-system 5a8ad9ed-e732-4605-8bb5-90e319ea4f13 963 0 2025-09-12 17:39:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-hdkml eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia199acd65d2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdkml" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hdkml-"
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.813 [INFO][4486] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdkml" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0"
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.842 [INFO][4513] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" HandleID="k8s-pod-network.13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" Workload="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0"
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.842 [INFO][4513] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" HandleID="k8s-pod-network.13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" Workload="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d130), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-hdkml", "timestamp":"2025-09-12 17:40:13.84241519 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.842 [INFO][4513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.842 [INFO][4513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.842 [INFO][4513] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.851 [INFO][4513] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" host="localhost"
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.857 [INFO][4513] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.861 [INFO][4513] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.863 [INFO][4513] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.865 [INFO][4513] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.865 [INFO][4513] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" host="localhost"
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.867 [INFO][4513] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.870 [INFO][4513] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" host="localhost"
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.875 [INFO][4513] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" host="localhost"
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.875 [INFO][4513] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" host="localhost"
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.875 [INFO][4513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:40:13.893640 containerd[1439]: 2025-09-12 17:40:13.875 [INFO][4513] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" HandleID="k8s-pod-network.13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" Workload="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0"
Sep 12 17:40:13.894152 containerd[1439]: 2025-09-12 17:40:13.878 [INFO][4486] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdkml" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hdkml-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5a8ad9ed-e732-4605-8bb5-90e319ea4f13", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-hdkml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia199acd65d2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:40:13.894152 containerd[1439]: 2025-09-12 17:40:13.878 [INFO][4486] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdkml" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0"
Sep 12 17:40:13.894152 containerd[1439]: 2025-09-12 17:40:13.878 [INFO][4486] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia199acd65d2 ContainerID="13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdkml" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0"
Sep 12 17:40:13.894152 containerd[1439]: 2025-09-12 17:40:13.880 [INFO][4486] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdkml" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0"
Sep 12 17:40:13.894152 containerd[1439]: 2025-09-12 17:40:13.882 [INFO][4486] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdkml" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hdkml-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5a8ad9ed-e732-4605-8bb5-90e319ea4f13", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc", Pod:"coredns-674b8bbfcf-hdkml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia199acd65d2", MAC:"26:bf:22:0d:9c:de", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:40:13.894152 containerd[1439]: 2025-09-12 17:40:13.890 [INFO][4486] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdkml" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0"
Sep 12 17:40:13.907686 containerd[1439]: time="2025-09-12T17:40:13.907620754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:40:13.907776 containerd[1439]: time="2025-09-12T17:40:13.907742714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:40:13.907798 containerd[1439]: time="2025-09-12T17:40:13.907771474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:40:13.907904 containerd[1439]: time="2025-09-12T17:40:13.907864953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:40:13.928780 systemd[1]: Started cri-containerd-13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc.scope - libcontainer container 13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc.
Sep 12 17:40:13.939523 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 17:40:13.955804 containerd[1439]: time="2025-09-12T17:40:13.955669228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hdkml,Uid:5a8ad9ed-e732-4605-8bb5-90e319ea4f13,Namespace:kube-system,Attempt:1,} returns sandbox id \"13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc\""
Sep 12 17:40:13.957655 kubelet[2491]: E0912 17:40:13.957626 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:40:13.963583 containerd[1439]: time="2025-09-12T17:40:13.962946895Z" level=info msg="CreateContainer within sandbox \"13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 17:40:13.977186 containerd[1439]: time="2025-09-12T17:40:13.976884070Z" level=info msg="CreateContainer within sandbox \"13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a448a7e29fac891e894985a927cc7a3118bc2476fc107090b8859599b81c71b\""
Sep 12 17:40:13.978928 containerd[1439]: time="2025-09-12T17:40:13.978095308Z" level=info msg="StartContainer for \"9a448a7e29fac891e894985a927cc7a3118bc2476fc107090b8859599b81c71b\""
Sep 12 17:40:13.982959 systemd-networkd[1382]: cali350b959255e: Link UP
Sep 12 17:40:13.983191 systemd-networkd[1382]: cali350b959255e: Gained carrier
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.815 [INFO][4499] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.832 [INFO][4499] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7pms6-eth0 csi-node-driver- calico-system 0e93698b-a189-45ca-894b-4585a51c5842 962 0 2025-09-12 17:39:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7pms6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali350b959255e [] [] }} ContainerID="a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" Namespace="calico-system" Pod="csi-node-driver-7pms6" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pms6-"
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.832 [INFO][4499] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" Namespace="calico-system" Pod="csi-node-driver-7pms6" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pms6-eth0"
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.861 [INFO][4521] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" HandleID="k8s-pod-network.a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" Workload="localhost-k8s-csi--node--driver--7pms6-eth0"
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.862 [INFO][4521] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" HandleID="k8s-pod-network.a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" Workload="localhost-k8s-csi--node--driver--7pms6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7pms6", "timestamp":"2025-09-12 17:40:13.861839236 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.862 [INFO][4521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.875 [INFO][4521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.875 [INFO][4521] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.952 [INFO][4521] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" host="localhost"
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.957 [INFO][4521] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.962 [INFO][4521] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.963 [INFO][4521] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.966 [INFO][4521] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.966 [INFO][4521] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" host="localhost"
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.967 [INFO][4521] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.971 [INFO][4521] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" host="localhost"
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.978 [INFO][4521] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" host="localhost"
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.978 [INFO][4521] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" host="localhost"
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.978 [INFO][4521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:40:13.998471 containerd[1439]: 2025-09-12 17:40:13.978 [INFO][4521] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" HandleID="k8s-pod-network.a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" Workload="localhost-k8s-csi--node--driver--7pms6-eth0"
Sep 12 17:40:14.000106 containerd[1439]: 2025-09-12 17:40:13.981 [INFO][4499] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" Namespace="calico-system" Pod="csi-node-driver-7pms6" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pms6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7pms6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e93698b-a189-45ca-894b-4585a51c5842", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7pms6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali350b959255e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:40:14.000106 containerd[1439]: 2025-09-12 17:40:13.981 [INFO][4499] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" Namespace="calico-system" Pod="csi-node-driver-7pms6" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pms6-eth0"
Sep 12 17:40:14.000106 containerd[1439]: 2025-09-12 17:40:13.981 [INFO][4499] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali350b959255e ContainerID="a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" Namespace="calico-system" Pod="csi-node-driver-7pms6" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pms6-eth0"
Sep 12 17:40:14.000106 containerd[1439]: 2025-09-12 17:40:13.983 [INFO][4499] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" Namespace="calico-system" Pod="csi-node-driver-7pms6" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pms6-eth0"
Sep 12 17:40:14.000106 containerd[1439]: 2025-09-12 17:40:13.984 [INFO][4499] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" Namespace="calico-system" Pod="csi-node-driver-7pms6" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pms6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7pms6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e93698b-a189-45ca-894b-4585a51c5842", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6", Pod:"csi-node-driver-7pms6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali350b959255e", MAC:"ea:51:9b:ed:cd:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:40:14.000106 containerd[1439]: 2025-09-12 17:40:13.995 [INFO][4499] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6" Namespace="calico-system" Pod="csi-node-driver-7pms6" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pms6-eth0"
Sep 12 17:40:14.013702 systemd[1]: Started cri-containerd-9a448a7e29fac891e894985a927cc7a3118bc2476fc107090b8859599b81c71b.scope - libcontainer container 9a448a7e29fac891e894985a927cc7a3118bc2476fc107090b8859599b81c71b.
Sep 12 17:40:14.019271 containerd[1439]: time="2025-09-12T17:40:14.015821161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:40:14.019271 containerd[1439]: time="2025-09-12T17:40:14.015875481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:40:14.019271 containerd[1439]: time="2025-09-12T17:40:14.015890121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:40:14.019271 containerd[1439]: time="2025-09-12T17:40:14.015957241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:40:14.035723 systemd[1]: Started cri-containerd-a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6.scope - libcontainer container a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6.
Sep 12 17:40:14.050977 containerd[1439]: time="2025-09-12T17:40:14.050930499Z" level=info msg="StartContainer for \"9a448a7e29fac891e894985a927cc7a3118bc2476fc107090b8859599b81c71b\" returns successfully"
Sep 12 17:40:14.060762 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 17:40:14.081786 containerd[1439]: time="2025-09-12T17:40:14.081745445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pms6,Uid:0e93698b-a189-45ca-894b-4585a51c5842,Namespace:calico-system,Attempt:1,} returns sandbox id \"a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6\""
Sep 12 17:40:14.182764 systemd-networkd[1382]: calic40effe1a98: Gained IPv6LL
Sep 12 17:40:14.568001 systemd-networkd[1382]: cali4a5e9517ad5: Gained IPv6LL
Sep 12 17:40:14.674635 containerd[1439]: time="2025-09-12T17:40:14.674598485Z" level=info msg="StopPodSandbox for \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\""
Sep 12 17:40:14.675160 containerd[1439]: time="2025-09-12T17:40:14.675127764Z" level=info msg="StopPodSandbox for \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\""
Sep 12 17:40:14.779620 containerd[1439]: 2025-09-12 17:40:14.738 [INFO][4718] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef"
Sep 12 17:40:14.779620 containerd[1439]: 2025-09-12 17:40:14.738 [INFO][4718] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" iface="eth0" netns="/var/run/netns/cni-1ffadff0-29c1-ecb9-6639-28796ef87544"
Sep 12 17:40:14.779620 containerd[1439]: 2025-09-12 17:40:14.738 [INFO][4718] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" iface="eth0" netns="/var/run/netns/cni-1ffadff0-29c1-ecb9-6639-28796ef87544"
Sep 12 17:40:14.779620 containerd[1439]: 2025-09-12 17:40:14.739 [INFO][4718] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" iface="eth0" netns="/var/run/netns/cni-1ffadff0-29c1-ecb9-6639-28796ef87544"
Sep 12 17:40:14.779620 containerd[1439]: 2025-09-12 17:40:14.739 [INFO][4718] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef"
Sep 12 17:40:14.779620 containerd[1439]: 2025-09-12 17:40:14.739 [INFO][4718] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef"
Sep 12 17:40:14.779620 containerd[1439]: 2025-09-12 17:40:14.758 [INFO][4741] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" HandleID="k8s-pod-network.b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Workload="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0"
Sep 12 17:40:14.779620 containerd[1439]: 2025-09-12 17:40:14.758 [INFO][4741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:40:14.779620 containerd[1439]: 2025-09-12 17:40:14.758 [INFO][4741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:40:14.779620 containerd[1439]: 2025-09-12 17:40:14.769 [WARNING][4741] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" HandleID="k8s-pod-network.b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Workload="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0"
Sep 12 17:40:14.779620 containerd[1439]: 2025-09-12 17:40:14.769 [INFO][4741] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" HandleID="k8s-pod-network.b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Workload="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0"
Sep 12 17:40:14.779620 containerd[1439]: 2025-09-12 17:40:14.772 [INFO][4741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:40:14.779620 containerd[1439]: 2025-09-12 17:40:14.774 [INFO][4718] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef"
Sep 12 17:40:14.780037 containerd[1439]: time="2025-09-12T17:40:14.779839300Z" level=info msg="TearDown network for sandbox \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\" successfully"
Sep 12 17:40:14.780037 containerd[1439]: time="2025-09-12T17:40:14.779869780Z" level=info msg="StopPodSandbox for \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\" returns successfully"
Sep 12 17:40:14.781347 containerd[1439]: time="2025-09-12T17:40:14.781121418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855787cfcf-fvkd7,Uid:1546fa5b-41ca-4825-9d46-eec66efc2e80,Namespace:calico-system,Attempt:1,}"
Sep 12 17:40:14.782493 systemd[1]: run-netns-cni\x2d1ffadff0\x2d29c1\x2decb9\x2d6639\x2d28796ef87544.mount: Deactivated successfully.
Sep 12 17:40:14.793639 containerd[1439]: 2025-09-12 17:40:14.731 [INFO][4719] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12"
Sep 12 17:40:14.793639 containerd[1439]: 2025-09-12 17:40:14.731 [INFO][4719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" iface="eth0" netns="/var/run/netns/cni-357b96a2-344a-5a2a-ea12-9cdad9fcf3e4"
Sep 12 17:40:14.793639 containerd[1439]: 2025-09-12 17:40:14.732 [INFO][4719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" iface="eth0" netns="/var/run/netns/cni-357b96a2-344a-5a2a-ea12-9cdad9fcf3e4"
Sep 12 17:40:14.793639 containerd[1439]: 2025-09-12 17:40:14.733 [INFO][4719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" iface="eth0" netns="/var/run/netns/cni-357b96a2-344a-5a2a-ea12-9cdad9fcf3e4"
Sep 12 17:40:14.793639 containerd[1439]: 2025-09-12 17:40:14.733 [INFO][4719] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12"
Sep 12 17:40:14.793639 containerd[1439]: 2025-09-12 17:40:14.733 [INFO][4719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12"
Sep 12 17:40:14.793639 containerd[1439]: 2025-09-12 17:40:14.765 [INFO][4734] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" HandleID="k8s-pod-network.bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Workload="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0"
Sep 12 17:40:14.793639 containerd[1439]: 2025-09-12 17:40:14.765 [INFO][4734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:40:14.793639 containerd[1439]: 2025-09-12 17:40:14.772 [INFO][4734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:40:14.793639 containerd[1439]: 2025-09-12 17:40:14.784 [WARNING][4734] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" HandleID="k8s-pod-network.bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Workload="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0"
Sep 12 17:40:14.793639 containerd[1439]: 2025-09-12 17:40:14.784 [INFO][4734] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" HandleID="k8s-pod-network.bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Workload="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0"
Sep 12 17:40:14.793639 containerd[1439]: 2025-09-12 17:40:14.786 [INFO][4734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:40:14.793639 containerd[1439]: 2025-09-12 17:40:14.790 [INFO][4719] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12"
Sep 12 17:40:14.794004 containerd[1439]: time="2025-09-12T17:40:14.793766716Z" level=info msg="TearDown network for sandbox \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\" successfully"
Sep 12 17:40:14.794004 containerd[1439]: time="2025-09-12T17:40:14.793786596Z" level=info msg="StopPodSandbox for \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\" returns successfully"
Sep 12 17:40:14.794121 kubelet[2491]: E0912 17:40:14.794096 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:40:14.797123 containerd[1439]: time="2025-09-12T17:40:14.795699113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qgttl,Uid:78c55e80-8b1c-48d6-a5e7-2c7abb2426e4,Namespace:kube-system,Attempt:1,}"
Sep 12 17:40:14.796859 systemd[1]: run-netns-cni\x2d357b96a2\x2d344a\x2d5a2a\x2dea12\x2d9cdad9fcf3e4.mount: Deactivated successfully.
Sep 12 17:40:14.836356 kubelet[2491]: E0912 17:40:14.836071 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:40:14.856251 kubelet[2491]: I0912 17:40:14.856190 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hdkml" podStartSLOduration=38.856173727 podStartE2EDuration="38.856173727s" podCreationTimestamp="2025-09-12 17:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:40:14.855898087 +0000 UTC m=+45.277244256" watchObservedRunningTime="2025-09-12 17:40:14.856173727 +0000 UTC m=+45.277519936"
Sep 12 17:40:14.940907 systemd-networkd[1382]: califa7dd627fd3: Link UP
Sep 12 17:40:14.941359 systemd-networkd[1382]: califa7dd627fd3: Gained carrier
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.836 [INFO][4761] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.861 [INFO][4761] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--qgttl-eth0 coredns-674b8bbfcf- kube-system 78c55e80-8b1c-48d6-a5e7-2c7abb2426e4 979 0 2025-09-12 17:39:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-qgttl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa7dd627fd3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" Namespace="kube-system" Pod="coredns-674b8bbfcf-qgttl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qgttl-"
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.861 [INFO][4761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" Namespace="kube-system" Pod="coredns-674b8bbfcf-qgttl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0"
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.892 [INFO][4780] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" HandleID="k8s-pod-network.1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" Workload="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0"
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.892 [INFO][4780] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" HandleID="k8s-pod-network.1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" Workload="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d730), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-qgttl", "timestamp":"2025-09-12 17:40:14.892619863 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.892 [INFO][4780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.892 [INFO][4780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.892 [INFO][4780] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.905 [INFO][4780] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" host="localhost"
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.909 [INFO][4780] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.915 [INFO][4780] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.917 [INFO][4780] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.919 [INFO][4780] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.919 [INFO][4780] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" host="localhost"
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.921 [INFO][4780] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.925 [INFO][4780] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" host="localhost"
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.933 [INFO][4780] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" host="localhost"
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.933 [INFO][4780] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" host="localhost"
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.933 [INFO][4780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:40:14.965197 containerd[1439]: 2025-09-12 17:40:14.933 [INFO][4780] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" HandleID="k8s-pod-network.1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" Workload="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0"
Sep 12 17:40:14.965773 containerd[1439]: 2025-09-12 17:40:14.936 [INFO][4761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" Namespace="kube-system" Pod="coredns-674b8bbfcf-qgttl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qgttl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"78c55e80-8b1c-48d6-a5e7-2c7abb2426e4", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-qgttl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa7dd627fd3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:40:14.965773 containerd[1439]: 2025-09-12 17:40:14.937 [INFO][4761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" Namespace="kube-system" Pod="coredns-674b8bbfcf-qgttl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0"
Sep 12 17:40:14.965773 containerd[1439]: 2025-09-12 17:40:14.937 [INFO][4761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa7dd627fd3 ContainerID="1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" Namespace="kube-system" Pod="coredns-674b8bbfcf-qgttl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0"
Sep 12 17:40:14.965773 containerd[1439]: 2025-09-12 17:40:14.942 [INFO][4761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" Namespace="kube-system" Pod="coredns-674b8bbfcf-qgttl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0"
Sep 12 17:40:14.965773 containerd[1439]: 2025-09-12 17:40:14.943 [INFO][4761] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" Namespace="kube-system" Pod="coredns-674b8bbfcf-qgttl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qgttl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"78c55e80-8b1c-48d6-a5e7-2c7abb2426e4", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb", Pod:"coredns-674b8bbfcf-qgttl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa7dd627fd3", MAC:"46:2f:9d:29:8d:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:40:14.965773 containerd[1439]: 2025-09-12 17:40:14.960 [INFO][4761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb" Namespace="kube-system" Pod="coredns-674b8bbfcf-qgttl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0"
Sep 12 17:40:14.983268 containerd[1439]: time="2025-09-12T17:40:14.981661426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:40:14.983268 containerd[1439]: time="2025-09-12T17:40:14.982051466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:40:14.983268 containerd[1439]: time="2025-09-12T17:40:14.982064266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:40:14.983268 containerd[1439]: time="2025-09-12T17:40:14.982164425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:40:15.009702 systemd[1]: Started cri-containerd-1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb.scope - libcontainer container 1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb.
Sep 12 17:40:15.027496 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 17:40:15.056770 systemd-networkd[1382]: cali08c1d0c86a0: Link UP
Sep 12 17:40:15.057328 systemd-networkd[1382]: cali08c1d0c86a0: Gained carrier
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:14.841 [INFO][4750] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:14.863 [INFO][4750] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0 calico-kube-controllers-855787cfcf- calico-system 1546fa5b-41ca-4825-9d46-eec66efc2e80 980 0 2025-09-12 17:39:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:855787cfcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-855787cfcf-fvkd7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali08c1d0c86a0 [] [] }} ContainerID="eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" Namespace="calico-system" Pod="calico-kube-controllers-855787cfcf-fvkd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-"
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:14.863 [INFO][4750] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" Namespace="calico-system" Pod="calico-kube-controllers-855787cfcf-fvkd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0"
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:14.909 [INFO][4785] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" HandleID="k8s-pod-network.eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" Workload="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0"
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:14.909 [INFO][4785] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" HandleID="k8s-pod-network.eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" Workload="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137400), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-855787cfcf-fvkd7", "timestamp":"2025-09-12 17:40:14.909228553 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:14.909 [INFO][4785] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:14.933 [INFO][4785] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:14.933 [INFO][4785] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:15.006 [INFO][4785] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" host="localhost"
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:15.016 [INFO][4785] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:15.023 [INFO][4785] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:15.026 [INFO][4785] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:15.032 [INFO][4785] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:15.032 [INFO][4785] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" host="localhost"
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:15.033 [INFO][4785] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:15.038 [INFO][4785] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" host="localhost"
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:15.046 [INFO][4785] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" host="localhost"
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:15.046 [INFO][4785] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" host="localhost"
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:15.046 [INFO][4785] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:40:15.077525 containerd[1439]: 2025-09-12 17:40:15.046 [INFO][4785] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" HandleID="k8s-pod-network.eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" Workload="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0"
Sep 12 17:40:15.078132 containerd[1439]: 2025-09-12 17:40:15.052 [INFO][4750] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" Namespace="calico-system" Pod="calico-kube-controllers-855787cfcf-fvkd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0", GenerateName:"calico-kube-controllers-855787cfcf-", Namespace:"calico-system", SelfLink:"", UID:"1546fa5b-41ca-4825-9d46-eec66efc2e80", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855787cfcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-855787cfcf-fvkd7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08c1d0c86a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:40:15.078132 containerd[1439]: 2025-09-12 17:40:15.052 [INFO][4750] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" Namespace="calico-system" Pod="calico-kube-controllers-855787cfcf-fvkd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0"
Sep 12 17:40:15.078132 containerd[1439]: 2025-09-12 17:40:15.052 [INFO][4750] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08c1d0c86a0 ContainerID="eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" Namespace="calico-system" Pod="calico-kube-controllers-855787cfcf-fvkd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0"
Sep 12 17:40:15.078132 containerd[1439]: 2025-09-12 17:40:15.058 [INFO][4750] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" Namespace="calico-system" Pod="calico-kube-controllers-855787cfcf-fvkd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0"
Sep 12 17:40:15.078132 containerd[1439]: 2025-09-12 17:40:15.059 [INFO][4750] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" Namespace="calico-system" Pod="calico-kube-controllers-855787cfcf-fvkd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0", GenerateName:"calico-kube-controllers-855787cfcf-", Namespace:"calico-system", SelfLink:"", UID:"1546fa5b-41ca-4825-9d46-eec66efc2e80", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855787cfcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094", Pod:"calico-kube-controllers-855787cfcf-fvkd7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08c1d0c86a0", MAC:"ee:16:6a:63:82:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:40:15.078132 containerd[1439]: 2025-09-12 17:40:15.074 [INFO][4750] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094" Namespace="calico-system" Pod="calico-kube-controllers-855787cfcf-fvkd7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0"
Sep 12 17:40:15.085450 containerd[1439]: time="2025-09-12T17:40:15.085307487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qgttl,Uid:78c55e80-8b1c-48d6-a5e7-2c7abb2426e4,Namespace:kube-system,Attempt:1,} returns sandbox id \"1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb\""
Sep 12 17:40:15.086055 kubelet[2491]: E0912 17:40:15.086026 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:40:15.090951 containerd[1439]: time="2025-09-12T17:40:15.090876077Z" level=info msg="CreateContainer within sandbox \"1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 17:40:15.105018 containerd[1439]: time="2025-09-12T17:40:15.104887013Z" level=info msg="CreateContainer within sandbox \"1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cedff2b01adba8cf6d010c703c6e7c79b3b7ca33a40679ab695955455e9b8a4d\""
Sep 12 17:40:15.106975 containerd[1439]: time="2025-09-12T17:40:15.106913770Z" level=info msg="StartContainer for \"cedff2b01adba8cf6d010c703c6e7c79b3b7ca33a40679ab695955455e9b8a4d\""
Sep 12 17:40:15.107748 containerd[1439]: time="2025-09-12T17:40:15.107211929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:40:15.107748 containerd[1439]: time="2025-09-12T17:40:15.107261249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:40:15.107748 containerd[1439]: time="2025-09-12T17:40:15.107278929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:40:15.107748 containerd[1439]: time="2025-09-12T17:40:15.107409009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:40:15.126710 systemd[1]: Started cri-containerd-eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094.scope - libcontainer container eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094.
Sep 12 17:40:15.135104 systemd[1]: Started cri-containerd-cedff2b01adba8cf6d010c703c6e7c79b3b7ca33a40679ab695955455e9b8a4d.scope - libcontainer container cedff2b01adba8cf6d010c703c6e7c79b3b7ca33a40679ab695955455e9b8a4d.
Sep 12 17:40:15.144809 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 17:40:15.176697 containerd[1439]: time="2025-09-12T17:40:15.176654050Z" level=info msg="StartContainer for \"cedff2b01adba8cf6d010c703c6e7c79b3b7ca33a40679ab695955455e9b8a4d\" returns successfully"
Sep 12 17:40:15.199205 containerd[1439]: time="2025-09-12T17:40:15.199050451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855787cfcf-fvkd7,Uid:1546fa5b-41ca-4825-9d46-eec66efc2e80,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094\""
Sep 12 17:40:15.313842 containerd[1439]: time="2025-09-12T17:40:15.313786533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:40:15.316378 containerd[1439]: time="2025-09-12T17:40:15.316229329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 12 17:40:15.317353 containerd[1439]: time="2025-09-12T17:40:15.317325047Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:40:15.319762 containerd[1439]: time="2025-09-12T17:40:15.319721843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:40:15.320996 containerd[1439]: time="2025-09-12T17:40:15.320792121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.332151284s"
Sep 12 17:40:15.320996 containerd[1439]: time="2025-09-12T17:40:15.320850001Z" level=info
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:40:15.323690 containerd[1439]: time="2025-09-12T17:40:15.323159317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:40:15.326185 containerd[1439]: time="2025-09-12T17:40:15.326102192Z" level=info msg="CreateContainer within sandbox \"b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:40:15.337146 containerd[1439]: time="2025-09-12T17:40:15.337054053Z" level=info msg="CreateContainer within sandbox \"b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cd30737c4a17e9c88f5f38cf21a70c3877c718db9eefb0ffce8206e24552062f\"" Sep 12 17:40:15.337556 containerd[1439]: time="2025-09-12T17:40:15.337410132Z" level=info msg="StartContainer for \"cd30737c4a17e9c88f5f38cf21a70c3877c718db9eefb0ffce8206e24552062f\"" Sep 12 17:40:15.363753 systemd[1]: Started cri-containerd-cd30737c4a17e9c88f5f38cf21a70c3877c718db9eefb0ffce8206e24552062f.scope - libcontainer container cd30737c4a17e9c88f5f38cf21a70c3877c718db9eefb0ffce8206e24552062f. Sep 12 17:40:15.391825 containerd[1439]: time="2025-09-12T17:40:15.391786479Z" level=info msg="StartContainer for \"cd30737c4a17e9c88f5f38cf21a70c3877c718db9eefb0ffce8206e24552062f\" returns successfully" Sep 12 17:40:15.561210 systemd[1]: Started sshd@7-10.0.0.153:22-10.0.0.1:38788.service - OpenSSH per-connection server daemon (10.0.0.1:38788). Sep 12 17:40:15.611504 sshd[5004]: Accepted publickey for core from 10.0.0.1 port 38788 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:40:15.613133 sshd[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:15.617981 systemd-logind[1420]: New session 8 of user core. Sep 12 17:40:15.627720 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 17:40:15.627845 containerd[1439]: time="2025-09-12T17:40:15.622893960Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:15.627845 containerd[1439]: time="2025-09-12T17:40:15.623631239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:40:15.627845 containerd[1439]: time="2025-09-12T17:40:15.625476036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 302.288559ms" Sep 12 17:40:15.627845 containerd[1439]: time="2025-09-12T17:40:15.625502356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:40:15.627845 containerd[1439]: time="2025-09-12T17:40:15.626227195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:40:15.630923 containerd[1439]: time="2025-09-12T17:40:15.630385267Z" level=info msg="CreateContainer within sandbox \"86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:40:15.647318 containerd[1439]: time="2025-09-12T17:40:15.647287838Z" level=info msg="CreateContainer within sandbox \"86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f4accaab79e9840da34ffa99a2b6685e4fa554b6f07fc39cb2a4ffa1877248c1\"" Sep 12 17:40:15.647780 containerd[1439]: time="2025-09-12T17:40:15.647750877Z" level=info msg="StartContainer for \"f4accaab79e9840da34ffa99a2b6685e4fa554b6f07fc39cb2a4ffa1877248c1\"" Sep 12 17:40:15.654650 systemd-networkd[1382]: cali350b959255e: Gained IPv6LL Sep 12 17:40:15.675072 systemd[1]: Started cri-containerd-f4accaab79e9840da34ffa99a2b6685e4fa554b6f07fc39cb2a4ffa1877248c1.scope - libcontainer container f4accaab79e9840da34ffa99a2b6685e4fa554b6f07fc39cb2a4ffa1877248c1. 
Sep 12 17:40:15.717845 containerd[1439]: time="2025-09-12T17:40:15.717801957Z" level=info msg="StartContainer for \"f4accaab79e9840da34ffa99a2b6685e4fa554b6f07fc39cb2a4ffa1877248c1\" returns successfully" Sep 12 17:40:15.718985 systemd-networkd[1382]: calia199acd65d2: Gained IPv6LL Sep 12 17:40:15.855559 kubelet[2491]: E0912 17:40:15.852890 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:40:15.861344 kubelet[2491]: E0912 17:40:15.860967 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:40:15.873520 kubelet[2491]: I0912 17:40:15.873128 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bc44ff76c-zs9w2" podStartSLOduration=24.537335931 podStartE2EDuration="26.873114889s" podCreationTimestamp="2025-09-12 17:39:49 +0000 UTC" firstStartedPulling="2025-09-12 17:40:12.987158919 +0000 UTC m=+43.408505048" lastFinishedPulling="2025-09-12 17:40:15.322937837 +0000 UTC m=+45.744284006" observedRunningTime="2025-09-12 17:40:15.858212355 +0000 UTC m=+46.279558524" watchObservedRunningTime="2025-09-12 17:40:15.873114889 +0000 UTC m=+46.294461058" Sep 12 17:40:15.895292 kubelet[2491]: I0912 17:40:15.895240 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qgttl" podStartSLOduration=39.895190811 podStartE2EDuration="39.895190811s" podCreationTimestamp="2025-09-12 17:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:40:15.874416287 +0000 UTC m=+46.295762456" watchObservedRunningTime="2025-09-12 17:40:15.895190811 +0000 UTC m=+46.316536980" Sep 12 17:40:15.900375 kubelet[2491]: I0912 17:40:15.899715 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bc44ff76c-46zh9" podStartSLOduration=24.373286926 podStartE2EDuration="26.899702763s" podCreationTimestamp="2025-09-12 17:39:49 +0000 UTC" firstStartedPulling="2025-09-12 17:40:13.099712918 +0000 UTC m=+43.521059087" lastFinishedPulling="2025-09-12 17:40:15.626128755 +0000 UTC m=+46.047474924" observedRunningTime="2025-09-12 17:40:15.899612923 +0000 UTC m=+46.320959092" watchObservedRunningTime="2025-09-12 17:40:15.899702763 +0000 UTC m=+46.321048932" Sep 12 17:40:15.940829 sshd[5004]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:15.945763 systemd[1]: sshd@7-10.0.0.153:22-10.0.0.1:38788.service: Deactivated successfully. Sep 12 17:40:15.947845 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:40:15.948712 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:40:15.949619 systemd-logind[1420]: Removed session 8. 
Sep 12 17:40:16.294671 systemd-networkd[1382]: cali08c1d0c86a0: Gained IPv6LL Sep 12 17:40:16.675100 containerd[1439]: time="2025-09-12T17:40:16.674863846Z" level=info msg="StopPodSandbox for \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\"" Sep 12 17:40:16.754713 containerd[1439]: time="2025-09-12T17:40:16.753527113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:16.754713 containerd[1439]: time="2025-09-12T17:40:16.754008072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 17:40:16.757214 containerd[1439]: time="2025-09-12T17:40:16.757173226Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:16.760750 containerd[1439]: time="2025-09-12T17:40:16.760721300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:16.761496 containerd[1439]: time="2025-09-12T17:40:16.761430259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.135176305s" Sep 12 17:40:16.761496 containerd[1439]: time="2025-09-12T17:40:16.761460659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 17:40:16.765720 containerd[1439]: time="2025-09-12T17:40:16.765149773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:40:16.769008 containerd[1439]: time="2025-09-12T17:40:16.768954686Z" level=info msg="CreateContainer within sandbox \"a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:40:16.789019 containerd[1439]: time="2025-09-12T17:40:16.788901933Z" level=info msg="CreateContainer within sandbox \"a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1917681da0fb913a3196e3466306df0eef6fd8c257159ad800e266521138381b\"" Sep 12 17:40:16.791407 containerd[1439]: time="2025-09-12T17:40:16.790713649Z" level=info msg="StartContainer for \"1917681da0fb913a3196e3466306df0eef6fd8c257159ad800e266521138381b\"" Sep 12 17:40:16.815216 containerd[1439]: 2025-09-12 17:40:16.734 [INFO][5102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:16.815216 containerd[1439]: 2025-09-12 17:40:16.735 [INFO][5102] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" iface="eth0" netns="/var/run/netns/cni-8357fded-0443-7b6d-4d9d-775655fc6cdd" Sep 12 17:40:16.815216 containerd[1439]: 2025-09-12 17:40:16.735 [INFO][5102] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" iface="eth0" netns="/var/run/netns/cni-8357fded-0443-7b6d-4d9d-775655fc6cdd" Sep 12 17:40:16.815216 containerd[1439]: 2025-09-12 17:40:16.735 [INFO][5102] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" iface="eth0" netns="/var/run/netns/cni-8357fded-0443-7b6d-4d9d-775655fc6cdd" Sep 12 17:40:16.815216 containerd[1439]: 2025-09-12 17:40:16.735 [INFO][5102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:16.815216 containerd[1439]: 2025-09-12 17:40:16.735 [INFO][5102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:16.815216 containerd[1439]: 2025-09-12 17:40:16.776 [INFO][5110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" HandleID="k8s-pod-network.88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Workload="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:16.815216 containerd[1439]: 2025-09-12 17:40:16.776 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:16.815216 containerd[1439]: 2025-09-12 17:40:16.776 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:16.815216 containerd[1439]: 2025-09-12 17:40:16.795 [WARNING][5110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" HandleID="k8s-pod-network.88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Workload="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:16.815216 containerd[1439]: 2025-09-12 17:40:16.795 [INFO][5110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" HandleID="k8s-pod-network.88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Workload="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:16.815216 containerd[1439]: 2025-09-12 17:40:16.797 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:16.815216 containerd[1439]: 2025-09-12 17:40:16.802 [INFO][5102] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:16.816341 containerd[1439]: time="2025-09-12T17:40:16.816276046Z" level=info msg="TearDown network for sandbox \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\" successfully" Sep 12 17:40:16.816431 containerd[1439]: time="2025-09-12T17:40:16.816414686Z" level=info msg="StopPodSandbox for \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\" returns successfully" Sep 12 17:40:16.821557 containerd[1439]: time="2025-09-12T17:40:16.817487644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-726gn,Uid:3a4561d4-7536-4a44-be2c-afac80d7a063,Namespace:calico-system,Attempt:1,}" Sep 12 17:40:16.819256 systemd[1]: run-netns-cni\x2d8357fded\x2d0443\x2d7b6d\x2d4d9d\x2d775655fc6cdd.mount: Deactivated successfully. 
Sep 12 17:40:16.835305 systemd[1]: Started cri-containerd-1917681da0fb913a3196e3466306df0eef6fd8c257159ad800e266521138381b.scope - libcontainer container 1917681da0fb913a3196e3466306df0eef6fd8c257159ad800e266521138381b. Sep 12 17:40:16.868424 kubelet[2491]: I0912 17:40:16.868392 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:16.869012 kubelet[2491]: E0912 17:40:16.868988 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:40:16.869502 kubelet[2491]: I0912 17:40:16.869475 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:16.869607 kubelet[2491]: E0912 17:40:16.869570 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:40:16.870704 systemd-networkd[1382]: califa7dd627fd3: Gained IPv6LL Sep 12 17:40:16.895558 containerd[1439]: time="2025-09-12T17:40:16.894656953Z" level=info msg="StartContainer for \"1917681da0fb913a3196e3466306df0eef6fd8c257159ad800e266521138381b\" returns successfully" Sep 12 17:40:17.000252 systemd-networkd[1382]: cali90f41b618de: Link UP Sep 12 17:40:17.001746 systemd-networkd[1382]: cali90f41b618de: Gained carrier Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.878 [INFO][5133] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.894 [INFO][5133] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--726gn-eth0 goldmane-54d579b49d- calico-system 3a4561d4-7536-4a44-be2c-afac80d7a063 1063 0 2025-09-12 17:39:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-726gn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali90f41b618de [] [] }} ContainerID="dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" Namespace="calico-system" Pod="goldmane-54d579b49d-726gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--726gn-" Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.894 [INFO][5133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" Namespace="calico-system" Pod="goldmane-54d579b49d-726gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.924 [INFO][5166] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" HandleID="k8s-pod-network.dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" Workload="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.924 [INFO][5166] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" HandleID="k8s-pod-network.dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" Workload="localhost-k8s-goldmane--54d579b49d--726gn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400004ce50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-726gn", "timestamp":"2025-09-12 17:40:16.924116903 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.924 [INFO][5166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.924 [INFO][5166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.924 [INFO][5166] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.935 [INFO][5166] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" host="localhost" Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.964 [INFO][5166] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.971 [INFO][5166] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.974 [INFO][5166] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.977 [INFO][5166] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.977 [INFO][5166] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" host="localhost" Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.981 [INFO][5166] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747 Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.984 [INFO][5166] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" host="localhost" Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.992 [INFO][5166] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" host="localhost" Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.992 [INFO][5166] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" host="localhost" Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.992 [INFO][5166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:17.016996 containerd[1439]: 2025-09-12 17:40:16.992 [INFO][5166] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" HandleID="k8s-pod-network.dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" Workload="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:17.018675 containerd[1439]: 2025-09-12 17:40:16.995 [INFO][5133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" Namespace="calico-system" Pod="goldmane-54d579b49d-726gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--726gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--726gn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"3a4561d4-7536-4a44-be2c-afac80d7a063", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-726gn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali90f41b618de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:17.018675 containerd[1439]: 2025-09-12 17:40:16.995 [INFO][5133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" Namespace="calico-system" Pod="goldmane-54d579b49d-726gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:17.018675 containerd[1439]: 2025-09-12 17:40:16.995 [INFO][5133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90f41b618de ContainerID="dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" Namespace="calico-system" Pod="goldmane-54d579b49d-726gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:17.018675 containerd[1439]: 2025-09-12 17:40:17.001 [INFO][5133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" Namespace="calico-system" Pod="goldmane-54d579b49d-726gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:17.018675 containerd[1439]: 2025-09-12 17:40:17.003 [INFO][5133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" Namespace="calico-system" Pod="goldmane-54d579b49d-726gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--726gn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--726gn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"3a4561d4-7536-4a44-be2c-afac80d7a063", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747", Pod:"goldmane-54d579b49d-726gn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali90f41b618de", MAC:"16:15:45:6c:60:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:17.018675 containerd[1439]: 2025-09-12 17:40:17.014 [INFO][5133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747" Namespace="calico-system" Pod="goldmane-54d579b49d-726gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:17.038345 containerd[1439]: time="2025-09-12T17:40:17.038265951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:17.038345 containerd[1439]: time="2025-09-12T17:40:17.038317511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:17.038345 containerd[1439]: time="2025-09-12T17:40:17.038328271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:17.038505 containerd[1439]: time="2025-09-12T17:40:17.038396271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:17.056685 systemd[1]: Started cri-containerd-dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747.scope - libcontainer container dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747. 
Sep 12 17:40:17.069760 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:40:17.088323 containerd[1439]: time="2025-09-12T17:40:17.088270627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-726gn,Uid:3a4561d4-7536-4a44-be2c-afac80d7a063,Namespace:calico-system,Attempt:1,} returns sandbox id \"dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747\"" Sep 12 17:40:17.893350 kubelet[2491]: E0912 17:40:17.893318 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:40:18.537534 containerd[1439]: time="2025-09-12T17:40:18.537488703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:18.538304 containerd[1439]: time="2025-09-12T17:40:18.538276821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 12 17:40:18.539208 containerd[1439]: time="2025-09-12T17:40:18.539155060Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:18.541704 containerd[1439]: time="2025-09-12T17:40:18.541513776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:18.542164 containerd[1439]: time="2025-09-12T17:40:18.542137495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.776856482s" Sep 12 17:40:18.542325 containerd[1439]: time="2025-09-12T17:40:18.542232295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 17:40:18.544117 containerd[1439]: time="2025-09-12T17:40:18.543719572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:40:18.566418 containerd[1439]: time="2025-09-12T17:40:18.566368295Z" level=info msg="CreateContainer within sandbox \"eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:40:18.678812 containerd[1439]: time="2025-09-12T17:40:18.678755230Z" level=info msg="CreateContainer within sandbox \"eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"629f2bb70942abcff0ce198999002cf0d15e02ab14822d6b419624a4e82a54bc\"" Sep 12 17:40:18.679406 containerd[1439]: time="2025-09-12T17:40:18.679374509Z" level=info msg="StartContainer for \"629f2bb70942abcff0ce198999002cf0d15e02ab14822d6b419624a4e82a54bc\"" Sep 12 17:40:18.717720 systemd[1]: Started cri-containerd-629f2bb70942abcff0ce198999002cf0d15e02ab14822d6b419624a4e82a54bc.scope - libcontainer container 
629f2bb70942abcff0ce198999002cf0d15e02ab14822d6b419624a4e82a54bc. Sep 12 17:40:18.754592 containerd[1439]: time="2025-09-12T17:40:18.754527506Z" level=info msg="StartContainer for \"629f2bb70942abcff0ce198999002cf0d15e02ab14822d6b419624a4e82a54bc\" returns successfully" Sep 12 17:40:18.893652 kubelet[2491]: I0912 17:40:18.892239 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-855787cfcf-fvkd7" podStartSLOduration=22.551155952 podStartE2EDuration="25.8922216s" podCreationTimestamp="2025-09-12 17:39:53 +0000 UTC" firstStartedPulling="2025-09-12 17:40:15.202495085 +0000 UTC m=+45.623841254" lastFinishedPulling="2025-09-12 17:40:18.543560733 +0000 UTC m=+48.964906902" observedRunningTime="2025-09-12 17:40:18.888345646 +0000 UTC m=+49.309691815" watchObservedRunningTime="2025-09-12 17:40:18.8922216 +0000 UTC m=+49.313567729" Sep 12 17:40:18.982688 systemd-networkd[1382]: cali90f41b618de: Gained IPv6LL Sep 12 17:40:19.785927 containerd[1439]: time="2025-09-12T17:40:19.785887389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:19.787442 containerd[1439]: time="2025-09-12T17:40:19.787405347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 12 17:40:19.789647 containerd[1439]: time="2025-09-12T17:40:19.788606825Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:19.793728 containerd[1439]: time="2025-09-12T17:40:19.793614897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:19.795012 containerd[1439]: time="2025-09-12T17:40:19.794817975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.251062763s" Sep 12 17:40:19.795012 containerd[1439]: time="2025-09-12T17:40:19.794879855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 12 17:40:19.797025 containerd[1439]: time="2025-09-12T17:40:19.796977531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:40:19.801642 containerd[1439]: time="2025-09-12T17:40:19.801497284Z" level=info msg="CreateContainer within sandbox \"a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:40:19.862797 containerd[1439]: time="2025-09-12T17:40:19.862734465Z" level=info msg="CreateContainer within sandbox \"a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3fd1403773875b10aa5e792590707b64eff78e5011239b449a9277804344c392\"" Sep 12 17:40:19.883384 containerd[1439]: 
time="2025-09-12T17:40:19.883332031Z" level=info msg="StartContainer for \"3fd1403773875b10aa5e792590707b64eff78e5011239b449a9277804344c392\"" Sep 12 17:40:19.914721 systemd[1]: Started cri-containerd-3fd1403773875b10aa5e792590707b64eff78e5011239b449a9277804344c392.scope - libcontainer container 3fd1403773875b10aa5e792590707b64eff78e5011239b449a9277804344c392. Sep 12 17:40:19.943697 containerd[1439]: time="2025-09-12T17:40:19.943653734Z" level=info msg="StartContainer for \"3fd1403773875b10aa5e792590707b64eff78e5011239b449a9277804344c392\" returns successfully" Sep 12 17:40:20.349557 kubelet[2491]: I0912 17:40:20.349347 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:20.349909 kubelet[2491]: E0912 17:40:20.349695 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:40:20.746466 kubelet[2491]: I0912 17:40:20.746423 2491 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:40:20.749443 kubelet[2491]: I0912 17:40:20.749404 2491 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:40:20.890774 kubelet[2491]: E0912 17:40:20.890735 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:40:20.901184 kubelet[2491]: I0912 17:40:20.900353 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7pms6" podStartSLOduration=22.188703911 podStartE2EDuration="27.900337684s" podCreationTimestamp="2025-09-12 17:39:53 +0000 UTC" firstStartedPulling="2025-09-12 17:40:14.08449096 +0000 UTC m=+44.505837129" lastFinishedPulling="2025-09-12 17:40:19.796124733 +0000 UTC m=+50.217470902" observedRunningTime="2025-09-12 17:40:20.900055364 +0000 UTC m=+51.321401533" watchObservedRunningTime="2025-09-12 17:40:20.900337684 +0000 UTC m=+51.321683853" Sep 12 17:40:20.953470 systemd[1]: Started sshd@8-10.0.0.153:22-10.0.0.1:34726.service - OpenSSH per-connection server daemon (10.0.0.1:34726). Sep 12 17:40:21.022403 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 34726 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:40:21.024430 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:21.030391 systemd-logind[1420]: New session 9 of user core. Sep 12 17:40:21.035669 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:40:21.183589 kernel: bpftool[5500]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:40:21.223329 kubelet[2491]: I0912 17:40:21.222817 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:21.380691 sshd[5469]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:21.389628 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:40:21.390858 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:40:21.392101 systemd[1]: sshd@8-10.0.0.153:22-10.0.0.1:34726.service: Deactivated successfully. Sep 12 17:40:21.396855 systemd-logind[1420]: Removed session 9. 
Sep 12 17:40:21.450489 systemd-networkd[1382]: vxlan.calico: Link UP Sep 12 17:40:21.450501 systemd-networkd[1382]: vxlan.calico: Gained carrier Sep 12 17:40:22.314580 kubelet[2491]: I0912 17:40:22.314522 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:22.521733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569216332.mount: Deactivated successfully. Sep 12 17:40:22.896831 containerd[1439]: time="2025-09-12T17:40:22.896787110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:22.897401 containerd[1439]: time="2025-09-12T17:40:22.897370829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 12 17:40:22.898514 containerd[1439]: time="2025-09-12T17:40:22.898465628Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:22.916587 containerd[1439]: time="2025-09-12T17:40:22.915971560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:22.916699 containerd[1439]: time="2025-09-12T17:40:22.916661039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.119653428s" Sep 12 17:40:22.916699 containerd[1439]: time="2025-09-12T17:40:22.916688719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 12 17:40:22.922140 containerd[1439]: time="2025-09-12T17:40:22.922012151Z" level=info msg="CreateContainer within sandbox \"dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:40:22.937300 containerd[1439]: time="2025-09-12T17:40:22.937263487Z" level=info msg="CreateContainer within sandbox \"dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"40102d48d56c6cf25cd96b63e5a81bdb1f05886b25fd9a85a7b716cd0d7b2c29\"" Sep 12 17:40:22.938048 containerd[1439]: time="2025-09-12T17:40:22.937874646Z" level=info msg="StartContainer for \"40102d48d56c6cf25cd96b63e5a81bdb1f05886b25fd9a85a7b716cd0d7b2c29\"" Sep 12 17:40:22.968709 systemd[1]: Started cri-containerd-40102d48d56c6cf25cd96b63e5a81bdb1f05886b25fd9a85a7b716cd0d7b2c29.scope - libcontainer container 40102d48d56c6cf25cd96b63e5a81bdb1f05886b25fd9a85a7b716cd0d7b2c29. 
Sep 12 17:40:22.996387 containerd[1439]: time="2025-09-12T17:40:22.996288875Z" level=info msg="StartContainer for \"40102d48d56c6cf25cd96b63e5a81bdb1f05886b25fd9a85a7b716cd0d7b2c29\" returns successfully" Sep 12 17:40:23.141291 kubelet[2491]: I0912 17:40:23.141217 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:23.526767 systemd-networkd[1382]: vxlan.calico: Gained IPv6LL Sep 12 17:40:23.918032 kubelet[2491]: I0912 17:40:23.917605 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-726gn" podStartSLOduration=26.09191288 podStartE2EDuration="31.917588457s" podCreationTimestamp="2025-09-12 17:39:52 +0000 UTC" firstStartedPulling="2025-09-12 17:40:17.091833581 +0000 UTC m=+47.513179750" lastFinishedPulling="2025-09-12 17:40:22.917509198 +0000 UTC m=+53.338855327" observedRunningTime="2025-09-12 17:40:23.916959218 +0000 UTC m=+54.338305387" watchObservedRunningTime="2025-09-12 17:40:23.917588457 +0000 UTC m=+54.338934626" Sep 12 17:40:26.394166 systemd[1]: Started sshd@9-10.0.0.153:22-10.0.0.1:34738.service - OpenSSH per-connection server daemon (10.0.0.1:34738). Sep 12 17:40:26.451305 sshd[5737]: Accepted publickey for core from 10.0.0.1 port 34738 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:40:26.452982 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:26.456827 systemd-logind[1420]: New session 10 of user core. Sep 12 17:40:26.464688 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:40:26.783784 sshd[5737]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:26.795310 systemd[1]: sshd@9-10.0.0.153:22-10.0.0.1:34738.service: Deactivated successfully. Sep 12 17:40:26.797027 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:40:26.798341 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:40:26.799666 systemd[1]: Started sshd@10-10.0.0.153:22-10.0.0.1:34754.service - OpenSSH per-connection server daemon (10.0.0.1:34754). Sep 12 17:40:26.800593 systemd-logind[1420]: Removed session 10. Sep 12 17:40:26.836657 sshd[5760]: Accepted publickey for core from 10.0.0.1 port 34754 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:40:26.837828 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:26.841156 systemd-logind[1420]: New session 11 of user core. Sep 12 17:40:26.848674 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:40:27.081195 sshd[5760]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:27.089450 systemd[1]: sshd@10-10.0.0.153:22-10.0.0.1:34754.service: Deactivated successfully. Sep 12 17:40:27.091400 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:40:27.095395 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:40:27.102834 systemd[1]: Started sshd@11-10.0.0.153:22-10.0.0.1:34764.service - OpenSSH per-connection server daemon (10.0.0.1:34764). Sep 12 17:40:27.105214 systemd-logind[1420]: Removed session 11. Sep 12 17:40:27.140642 sshd[5772]: Accepted publickey for core from 10.0.0.1 port 34764 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:40:27.141323 sshd[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:27.148700 systemd-logind[1420]: New session 12 of user core. 
Sep 12 17:40:27.152774 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:40:27.322993 sshd[5772]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:27.327451 systemd[1]: sshd@11-10.0.0.153:22-10.0.0.1:34764.service: Deactivated successfully. Sep 12 17:40:27.329322 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:40:27.330018 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:40:27.330826 systemd-logind[1420]: Removed session 12. Sep 12 17:40:29.665999 containerd[1439]: time="2025-09-12T17:40:29.665961021Z" level=info msg="StopPodSandbox for \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\"" Sep 12 17:40:29.772170 containerd[1439]: 2025-09-12 17:40:29.735 [WARNING][5801] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--726gn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"3a4561d4-7536-4a44-be2c-afac80d7a063", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747", Pod:"goldmane-54d579b49d-726gn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali90f41b618de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:29.772170 containerd[1439]: 2025-09-12 17:40:29.735 [INFO][5801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:29.772170 containerd[1439]: 2025-09-12 17:40:29.736 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" iface="eth0" netns="" Sep 12 17:40:29.772170 containerd[1439]: 2025-09-12 17:40:29.736 [INFO][5801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:29.772170 containerd[1439]: 2025-09-12 17:40:29.736 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:29.772170 containerd[1439]: 2025-09-12 17:40:29.755 [INFO][5811] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" HandleID="k8s-pod-network.88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Workload="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:29.772170 containerd[1439]: 2025-09-12 17:40:29.755 [INFO][5811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:29.772170 containerd[1439]: 2025-09-12 17:40:29.755 [INFO][5811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:29.772170 containerd[1439]: 2025-09-12 17:40:29.765 [WARNING][5811] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" HandleID="k8s-pod-network.88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Workload="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:29.772170 containerd[1439]: 2025-09-12 17:40:29.765 [INFO][5811] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" HandleID="k8s-pod-network.88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Workload="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:29.772170 containerd[1439]: 2025-09-12 17:40:29.767 [INFO][5811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:29.772170 containerd[1439]: 2025-09-12 17:40:29.769 [INFO][5801] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:29.772170 containerd[1439]: time="2025-09-12T17:40:29.772051787Z" level=info msg="TearDown network for sandbox \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\" successfully" Sep 12 17:40:29.772170 containerd[1439]: time="2025-09-12T17:40:29.772077147Z" level=info msg="StopPodSandbox for \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\" returns successfully" Sep 12 17:40:29.773103 containerd[1439]: time="2025-09-12T17:40:29.772817746Z" level=info msg="RemovePodSandbox for \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\"" Sep 12 17:40:29.775149 containerd[1439]: time="2025-09-12T17:40:29.775070662Z" level=info msg="Forcibly stopping sandbox \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\"" Sep 12 17:40:29.838829 containerd[1439]: 2025-09-12 17:40:29.807 [WARNING][5828] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--726gn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"3a4561d4-7536-4a44-be2c-afac80d7a063", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd143f02593604ccb1649a7f3c1bf23f4112e3a5c29dc6aff72cca2e1405d747", Pod:"goldmane-54d579b49d-726gn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali90f41b618de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:29.838829 containerd[1439]: 2025-09-12 17:40:29.807 [INFO][5828] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:29.838829 containerd[1439]: 2025-09-12 17:40:29.807 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" iface="eth0" netns="" Sep 12 17:40:29.838829 containerd[1439]: 2025-09-12 17:40:29.807 [INFO][5828] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:29.838829 containerd[1439]: 2025-09-12 17:40:29.807 [INFO][5828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:29.838829 containerd[1439]: 2025-09-12 17:40:29.824 [INFO][5837] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" HandleID="k8s-pod-network.88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Workload="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:29.838829 containerd[1439]: 2025-09-12 17:40:29.824 [INFO][5837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:29.838829 containerd[1439]: 2025-09-12 17:40:29.824 [INFO][5837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:29.838829 containerd[1439]: 2025-09-12 17:40:29.834 [WARNING][5837] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" HandleID="k8s-pod-network.88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Workload="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:29.838829 containerd[1439]: 2025-09-12 17:40:29.834 [INFO][5837] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" HandleID="k8s-pod-network.88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Workload="localhost-k8s-goldmane--54d579b49d--726gn-eth0" Sep 12 17:40:29.838829 containerd[1439]: 2025-09-12 17:40:29.835 [INFO][5837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:29.838829 containerd[1439]: 2025-09-12 17:40:29.837 [INFO][5828] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426" Sep 12 17:40:29.839320 containerd[1439]: time="2025-09-12T17:40:29.838871850Z" level=info msg="TearDown network for sandbox \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\" successfully" Sep 12 17:40:29.852677 containerd[1439]: time="2025-09-12T17:40:29.852630510Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:29.852778 containerd[1439]: time="2025-09-12T17:40:29.852718270Z" level=info msg="RemovePodSandbox \"88ba3bc06a0aa6b0056e2e2cad73e258aa2e8ae073adba496f6b6eaab171e426\" returns successfully" Sep 12 17:40:29.853577 containerd[1439]: time="2025-09-12T17:40:29.853224789Z" level=info msg="StopPodSandbox for \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\"" Sep 12 17:40:29.924113 containerd[1439]: 2025-09-12 17:40:29.884 [WARNING][5855] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0", GenerateName:"calico-kube-controllers-855787cfcf-", Namespace:"calico-system", SelfLink:"", UID:"1546fa5b-41ca-4825-9d46-eec66efc2e80", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855787cfcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094", Pod:"calico-kube-controllers-855787cfcf-fvkd7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08c1d0c86a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:29.924113 containerd[1439]: 2025-09-12 17:40:29.886 [INFO][5855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Sep 12 17:40:29.924113 containerd[1439]: 2025-09-12 17:40:29.886 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" iface="eth0" netns="" Sep 12 17:40:29.924113 containerd[1439]: 2025-09-12 17:40:29.886 [INFO][5855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Sep 12 17:40:29.924113 containerd[1439]: 2025-09-12 17:40:29.886 [INFO][5855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Sep 12 17:40:29.924113 containerd[1439]: 2025-09-12 17:40:29.908 [INFO][5863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" HandleID="k8s-pod-network.b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Workload="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0" Sep 12 17:40:29.924113 containerd[1439]: 2025-09-12 17:40:29.909 [INFO][5863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:29.924113 containerd[1439]: 2025-09-12 17:40:29.909 [INFO][5863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:29.924113 containerd[1439]: 2025-09-12 17:40:29.918 [WARNING][5863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" HandleID="k8s-pod-network.b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Workload="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0" Sep 12 17:40:29.924113 containerd[1439]: 2025-09-12 17:40:29.918 [INFO][5863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" HandleID="k8s-pod-network.b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Workload="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0" Sep 12 17:40:29.924113 containerd[1439]: 2025-09-12 17:40:29.920 [INFO][5863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:29.924113 containerd[1439]: 2025-09-12 17:40:29.922 [INFO][5855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Sep 12 17:40:29.924113 containerd[1439]: time="2025-09-12T17:40:29.924090766Z" level=info msg="TearDown network for sandbox \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\" successfully" Sep 12 17:40:29.924113 containerd[1439]: time="2025-09-12T17:40:29.924116246Z" level=info msg="StopPodSandbox for \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\" returns successfully" Sep 12 17:40:29.924639 containerd[1439]: time="2025-09-12T17:40:29.924485565Z" level=info msg="RemovePodSandbox for \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\"" Sep 12 17:40:29.924639 containerd[1439]: time="2025-09-12T17:40:29.924515925Z" level=info msg="Forcibly stopping sandbox \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\"" Sep 12 17:40:29.988997 containerd[1439]: 2025-09-12 17:40:29.958 [WARNING][5881] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0", GenerateName:"calico-kube-controllers-855787cfcf-", Namespace:"calico-system", SelfLink:"", UID:"1546fa5b-41ca-4825-9d46-eec66efc2e80", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855787cfcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb40ead9338b463b5a9bb928682a4ef12a9cf8c9d32ade299e099d1425332094", Pod:"calico-kube-controllers-855787cfcf-fvkd7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08c1d0c86a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:29.988997 containerd[1439]: 2025-09-12 17:40:29.958 [INFO][5881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Sep 12 17:40:29.988997 containerd[1439]: 2025-09-12 17:40:29.958 [INFO][5881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" iface="eth0" netns="" Sep 12 17:40:29.988997 containerd[1439]: 2025-09-12 17:40:29.958 [INFO][5881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Sep 12 17:40:29.988997 containerd[1439]: 2025-09-12 17:40:29.958 [INFO][5881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Sep 12 17:40:29.988997 containerd[1439]: 2025-09-12 17:40:29.976 [INFO][5890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" HandleID="k8s-pod-network.b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Workload="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0" Sep 12 17:40:29.988997 containerd[1439]: 2025-09-12 17:40:29.976 [INFO][5890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:29.988997 containerd[1439]: 2025-09-12 17:40:29.976 [INFO][5890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:29.988997 containerd[1439]: 2025-09-12 17:40:29.984 [WARNING][5890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" HandleID="k8s-pod-network.b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Workload="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0" Sep 12 17:40:29.988997 containerd[1439]: 2025-09-12 17:40:29.984 [INFO][5890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" HandleID="k8s-pod-network.b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Workload="localhost-k8s-calico--kube--controllers--855787cfcf--fvkd7-eth0" Sep 12 17:40:29.988997 containerd[1439]: 2025-09-12 17:40:29.985 [INFO][5890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:29.988997 containerd[1439]: 2025-09-12 17:40:29.987 [INFO][5881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef" Sep 12 17:40:29.989386 containerd[1439]: time="2025-09-12T17:40:29.989035752Z" level=info msg="TearDown network for sandbox \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\" successfully" Sep 12 17:40:29.991864 containerd[1439]: time="2025-09-12T17:40:29.991834748Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:29.991928 containerd[1439]: time="2025-09-12T17:40:29.991906107Z" level=info msg="RemovePodSandbox \"b2c975f39ed11ecea6fc7f46941bca3f216b2cf10181835be7681803ddf650ef\" returns successfully" Sep 12 17:40:29.992455 containerd[1439]: time="2025-09-12T17:40:29.992428507Z" level=info msg="StopPodSandbox for \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\"" Sep 12 17:40:30.055864 containerd[1439]: 2025-09-12 17:40:30.023 [WARNING][5909] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" WorkloadEndpoint="localhost-k8s-whisker--86877d4794--4zh8m-eth0" Sep 12 17:40:30.055864 containerd[1439]: 2025-09-12 17:40:30.023 [INFO][5909] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:30.055864 containerd[1439]: 2025-09-12 17:40:30.023 [INFO][5909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" iface="eth0" netns="" Sep 12 17:40:30.055864 containerd[1439]: 2025-09-12 17:40:30.023 [INFO][5909] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:30.055864 containerd[1439]: 2025-09-12 17:40:30.023 [INFO][5909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:30.055864 containerd[1439]: 2025-09-12 17:40:30.041 [INFO][5918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" HandleID="k8s-pod-network.c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Workload="localhost-k8s-whisker--86877d4794--4zh8m-eth0" Sep 12 17:40:30.055864 containerd[1439]: 2025-09-12 17:40:30.041 [INFO][5918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.055864 containerd[1439]: 2025-09-12 17:40:30.041 [INFO][5918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:30.055864 containerd[1439]: 2025-09-12 17:40:30.051 [WARNING][5918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" HandleID="k8s-pod-network.c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Workload="localhost-k8s-whisker--86877d4794--4zh8m-eth0" Sep 12 17:40:30.055864 containerd[1439]: 2025-09-12 17:40:30.051 [INFO][5918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" HandleID="k8s-pod-network.c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Workload="localhost-k8s-whisker--86877d4794--4zh8m-eth0" Sep 12 17:40:30.055864 containerd[1439]: 2025-09-12 17:40:30.052 [INFO][5918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.055864 containerd[1439]: 2025-09-12 17:40:30.054 [INFO][5909] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:30.056326 containerd[1439]: time="2025-09-12T17:40:30.055918255Z" level=info msg="TearDown network for sandbox \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\" successfully" Sep 12 17:40:30.056326 containerd[1439]: time="2025-09-12T17:40:30.055945855Z" level=info msg="StopPodSandbox for \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\" returns successfully" Sep 12 17:40:30.056663 containerd[1439]: time="2025-09-12T17:40:30.056640174Z" level=info msg="RemovePodSandbox for \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\"" Sep 12 17:40:30.056710 containerd[1439]: time="2025-09-12T17:40:30.056671174Z" level=info msg="Forcibly stopping sandbox \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\"" Sep 12 17:40:30.118618 containerd[1439]: 2025-09-12 17:40:30.087 [WARNING][5936] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" WorkloadEndpoint="localhost-k8s-whisker--86877d4794--4zh8m-eth0" Sep 12 17:40:30.118618 containerd[1439]: 2025-09-12 17:40:30.087 [INFO][5936] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:30.118618 containerd[1439]: 2025-09-12 17:40:30.087 [INFO][5936] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" iface="eth0" netns="" Sep 12 17:40:30.118618 containerd[1439]: 2025-09-12 17:40:30.087 [INFO][5936] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:30.118618 containerd[1439]: 2025-09-12 17:40:30.087 [INFO][5936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:30.118618 containerd[1439]: 2025-09-12 17:40:30.104 [INFO][5945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" HandleID="k8s-pod-network.c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Workload="localhost-k8s-whisker--86877d4794--4zh8m-eth0" Sep 12 17:40:30.118618 containerd[1439]: 2025-09-12 17:40:30.104 [INFO][5945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.118618 containerd[1439]: 2025-09-12 17:40:30.104 [INFO][5945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:30.118618 containerd[1439]: 2025-09-12 17:40:30.113 [WARNING][5945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" HandleID="k8s-pod-network.c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Workload="localhost-k8s-whisker--86877d4794--4zh8m-eth0" Sep 12 17:40:30.118618 containerd[1439]: 2025-09-12 17:40:30.113 [INFO][5945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" HandleID="k8s-pod-network.c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Workload="localhost-k8s-whisker--86877d4794--4zh8m-eth0" Sep 12 17:40:30.118618 containerd[1439]: 2025-09-12 17:40:30.115 [INFO][5945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.118618 containerd[1439]: 2025-09-12 17:40:30.117 [INFO][5936] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52" Sep 12 17:40:30.118987 containerd[1439]: time="2025-09-12T17:40:30.118669165Z" level=info msg="TearDown network for sandbox \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\" successfully" Sep 12 17:40:30.145681 containerd[1439]: time="2025-09-12T17:40:30.145580166Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:30.145885 containerd[1439]: time="2025-09-12T17:40:30.145735366Z" level=info msg="RemovePodSandbox \"c210926ce0ad8d7d54c6ce0af357082424b3442535d569e11e793ef676d6cb52\" returns successfully" Sep 12 17:40:30.146278 containerd[1439]: time="2025-09-12T17:40:30.146246605Z" level=info msg="StopPodSandbox for \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\"" Sep 12 17:40:30.214063 containerd[1439]: 2025-09-12 17:40:30.181 [WARNING][5963] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hdkml-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5a8ad9ed-e732-4605-8bb5-90e319ea4f13", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc", Pod:"coredns-674b8bbfcf-hdkml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia199acd65d2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.214063 containerd[1439]: 2025-09-12 17:40:30.181 [INFO][5963] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:30.214063 containerd[1439]: 2025-09-12 17:40:30.181 [INFO][5963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" iface="eth0" netns="" Sep 12 17:40:30.214063 containerd[1439]: 2025-09-12 17:40:30.181 [INFO][5963] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:30.214063 containerd[1439]: 2025-09-12 17:40:30.181 [INFO][5963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:30.214063 containerd[1439]: 2025-09-12 17:40:30.198 [INFO][5972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" HandleID="k8s-pod-network.dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Workload="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0" Sep 12 17:40:30.214063 containerd[1439]: 2025-09-12 17:40:30.198 [INFO][5972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.214063 containerd[1439]: 2025-09-12 17:40:30.198 [INFO][5972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:30.214063 containerd[1439]: 2025-09-12 17:40:30.209 [WARNING][5972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" HandleID="k8s-pod-network.dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Workload="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0" Sep 12 17:40:30.214063 containerd[1439]: 2025-09-12 17:40:30.209 [INFO][5972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" HandleID="k8s-pod-network.dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Workload="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0" Sep 12 17:40:30.214063 containerd[1439]: 2025-09-12 17:40:30.210 [INFO][5972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.214063 containerd[1439]: 2025-09-12 17:40:30.212 [INFO][5963] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:30.214063 containerd[1439]: time="2025-09-12T17:40:30.214034027Z" level=info msg="TearDown network for sandbox \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\" successfully" Sep 12 17:40:30.215264 containerd[1439]: time="2025-09-12T17:40:30.214067307Z" level=info msg="StopPodSandbox for \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\" returns successfully" Sep 12 17:40:30.215264 containerd[1439]: time="2025-09-12T17:40:30.214522867Z" level=info msg="RemovePodSandbox for \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\"" Sep 12 17:40:30.215264 containerd[1439]: time="2025-09-12T17:40:30.214572147Z" level=info msg="Forcibly stopping sandbox \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\"" Sep 12 17:40:30.282489 containerd[1439]: 2025-09-12 17:40:30.248 [WARNING][5991] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hdkml-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5a8ad9ed-e732-4605-8bb5-90e319ea4f13", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13168acd9412ddf83d8a51ae958646f882b9178603b6c95f8995c978b4867bcc", Pod:"coredns-674b8bbfcf-hdkml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia199acd65d2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.282489 containerd[1439]: 2025-09-12 17:40:30.249 [INFO][5991] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:30.282489 containerd[1439]: 2025-09-12 17:40:30.249 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" iface="eth0" netns="" Sep 12 17:40:30.282489 containerd[1439]: 2025-09-12 17:40:30.249 [INFO][5991] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:30.282489 containerd[1439]: 2025-09-12 17:40:30.249 [INFO][5991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:30.282489 containerd[1439]: 2025-09-12 17:40:30.268 [INFO][6000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" HandleID="k8s-pod-network.dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Workload="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0" Sep 12 17:40:30.282489 containerd[1439]: 2025-09-12 17:40:30.268 [INFO][6000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.282489 containerd[1439]: 2025-09-12 17:40:30.268 [INFO][6000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:30.282489 containerd[1439]: 2025-09-12 17:40:30.277 [WARNING][6000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" HandleID="k8s-pod-network.dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Workload="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0" Sep 12 17:40:30.282489 containerd[1439]: 2025-09-12 17:40:30.277 [INFO][6000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" HandleID="k8s-pod-network.dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Workload="localhost-k8s-coredns--674b8bbfcf--hdkml-eth0" Sep 12 17:40:30.282489 containerd[1439]: 2025-09-12 17:40:30.279 [INFO][6000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.282489 containerd[1439]: 2025-09-12 17:40:30.280 [INFO][5991] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f" Sep 12 17:40:30.282903 containerd[1439]: time="2025-09-12T17:40:30.282519369Z" level=info msg="TearDown network for sandbox \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\" successfully" Sep 12 17:40:30.290940 containerd[1439]: time="2025-09-12T17:40:30.290900917Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:30.290996 containerd[1439]: time="2025-09-12T17:40:30.290967437Z" level=info msg="RemovePodSandbox \"dc777929450862cfd23e0a8a973a3b5e79d84e18e1b0a434cb2d51aafe79671f\" returns successfully" Sep 12 17:40:30.291437 containerd[1439]: time="2025-09-12T17:40:30.291397156Z" level=info msg="StopPodSandbox for \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\"" Sep 12 17:40:30.357916 containerd[1439]: 2025-09-12 17:40:30.322 [WARNING][6018] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qgttl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"78c55e80-8b1c-48d6-a5e7-2c7abb2426e4", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb", Pod:"coredns-674b8bbfcf-qgttl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa7dd627fd3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.357916 containerd[1439]: 2025-09-12 17:40:30.322 [INFO][6018] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Sep 12 17:40:30.357916 containerd[1439]: 2025-09-12 17:40:30.322 [INFO][6018] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" iface="eth0" netns="" Sep 12 17:40:30.357916 containerd[1439]: 2025-09-12 17:40:30.322 [INFO][6018] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Sep 12 17:40:30.357916 containerd[1439]: 2025-09-12 17:40:30.322 [INFO][6018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Sep 12 17:40:30.357916 containerd[1439]: 2025-09-12 17:40:30.344 [INFO][6027] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" HandleID="k8s-pod-network.bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Workload="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0" Sep 12 17:40:30.357916 containerd[1439]: 2025-09-12 17:40:30.344 [INFO][6027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.357916 containerd[1439]: 2025-09-12 17:40:30.344 [INFO][6027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:30.357916 containerd[1439]: 2025-09-12 17:40:30.353 [WARNING][6027] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" HandleID="k8s-pod-network.bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Workload="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0" Sep 12 17:40:30.357916 containerd[1439]: 2025-09-12 17:40:30.353 [INFO][6027] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" HandleID="k8s-pod-network.bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Workload="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0" Sep 12 17:40:30.357916 containerd[1439]: 2025-09-12 17:40:30.354 [INFO][6027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.357916 containerd[1439]: 2025-09-12 17:40:30.356 [INFO][6018] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Sep 12 17:40:30.358957 containerd[1439]: time="2025-09-12T17:40:30.357953900Z" level=info msg="TearDown network for sandbox \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\" successfully" Sep 12 17:40:30.358957 containerd[1439]: time="2025-09-12T17:40:30.357977900Z" level=info msg="StopPodSandbox for \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\" returns successfully" Sep 12 17:40:30.358957 containerd[1439]: time="2025-09-12T17:40:30.358373380Z" level=info msg="RemovePodSandbox for \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\"" Sep 12 17:40:30.358957 containerd[1439]: time="2025-09-12T17:40:30.358402979Z" level=info msg="Forcibly stopping sandbox \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\"" Sep 12 17:40:30.422317 containerd[1439]: 2025-09-12 17:40:30.391 [WARNING][6045] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qgttl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"78c55e80-8b1c-48d6-a5e7-2c7abb2426e4", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a55be5dc9e447607603a015a0b2e479ee15340461ac48ba57a1b453cda36ebb", Pod:"coredns-674b8bbfcf-qgttl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa7dd627fd3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.422317 containerd[1439]: 2025-09-12 17:40:30.391 [INFO][6045] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Sep 12 17:40:30.422317 containerd[1439]: 2025-09-12 17:40:30.391 [INFO][6045] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" iface="eth0" netns="" Sep 12 17:40:30.422317 containerd[1439]: 2025-09-12 17:40:30.391 [INFO][6045] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Sep 12 17:40:30.422317 containerd[1439]: 2025-09-12 17:40:30.391 [INFO][6045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Sep 12 17:40:30.422317 containerd[1439]: 2025-09-12 17:40:30.408 [INFO][6053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" HandleID="k8s-pod-network.bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Workload="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0" Sep 12 17:40:30.422317 containerd[1439]: 2025-09-12 17:40:30.408 [INFO][6053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.422317 containerd[1439]: 2025-09-12 17:40:30.408 [INFO][6053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:30.422317 containerd[1439]: 2025-09-12 17:40:30.417 [WARNING][6053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" HandleID="k8s-pod-network.bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Workload="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0" Sep 12 17:40:30.422317 containerd[1439]: 2025-09-12 17:40:30.417 [INFO][6053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" HandleID="k8s-pod-network.bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Workload="localhost-k8s-coredns--674b8bbfcf--qgttl-eth0" Sep 12 17:40:30.422317 containerd[1439]: 2025-09-12 17:40:30.418 [INFO][6053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.422317 containerd[1439]: 2025-09-12 17:40:30.420 [INFO][6045] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12" Sep 12 17:40:30.422729 containerd[1439]: time="2025-09-12T17:40:30.422351647Z" level=info msg="TearDown network for sandbox \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\" successfully" Sep 12 17:40:30.434939 containerd[1439]: time="2025-09-12T17:40:30.434872789Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:30.435016 containerd[1439]: time="2025-09-12T17:40:30.434951309Z" level=info msg="RemovePodSandbox \"bdbac45e6258d2533a1e55b31135fd528075e542d9a713b83988db2fe0334a12\" returns successfully" Sep 12 17:40:30.435422 containerd[1439]: time="2025-09-12T17:40:30.435379949Z" level=info msg="StopPodSandbox for \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\"" Sep 12 17:40:30.501966 containerd[1439]: 2025-09-12 17:40:30.469 [WARNING][6071] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0", GenerateName:"calico-apiserver-bc44ff76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"249c9c72-dc41-4c2e-9f20-c59454807552", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc44ff76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4", Pod:"calico-apiserver-bc44ff76c-zs9w2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a5e9517ad5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.501966 containerd[1439]: 2025-09-12 17:40:30.470 [INFO][6071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:30.501966 containerd[1439]: 2025-09-12 17:40:30.470 [INFO][6071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" iface="eth0" netns="" Sep 12 17:40:30.501966 containerd[1439]: 2025-09-12 17:40:30.470 [INFO][6071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:30.501966 containerd[1439]: 2025-09-12 17:40:30.470 [INFO][6071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:30.501966 containerd[1439]: 2025-09-12 17:40:30.487 [INFO][6079] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" HandleID="k8s-pod-network.2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Workload="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:30.501966 containerd[1439]: 2025-09-12 17:40:30.487 [INFO][6079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.501966 containerd[1439]: 2025-09-12 17:40:30.487 [INFO][6079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:30.501966 containerd[1439]: 2025-09-12 17:40:30.496 [WARNING][6079] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" HandleID="k8s-pod-network.2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Workload="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:30.501966 containerd[1439]: 2025-09-12 17:40:30.496 [INFO][6079] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" HandleID="k8s-pod-network.2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Workload="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:30.501966 containerd[1439]: 2025-09-12 17:40:30.498 [INFO][6079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.501966 containerd[1439]: 2025-09-12 17:40:30.500 [INFO][6071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:30.502368 containerd[1439]: time="2025-09-12T17:40:30.502008453Z" level=info msg="TearDown network for sandbox \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\" successfully" Sep 12 17:40:30.502368 containerd[1439]: time="2025-09-12T17:40:30.502032933Z" level=info msg="StopPodSandbox for \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\" returns successfully" Sep 12 17:40:30.502504 containerd[1439]: time="2025-09-12T17:40:30.502481252Z" level=info msg="RemovePodSandbox for \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\"" Sep 12 17:40:30.502779 containerd[1439]: time="2025-09-12T17:40:30.502514852Z" level=info msg="Forcibly stopping sandbox \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\"" Sep 12 17:40:30.565125 containerd[1439]: 2025-09-12 17:40:30.533 [WARNING][6097] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0", GenerateName:"calico-apiserver-bc44ff76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"249c9c72-dc41-4c2e-9f20-c59454807552", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc44ff76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7dd21bc4594f71b64c162d4bf41378b2fbfe0d99aa862bc7b090bba9aa8d2f4", Pod:"calico-apiserver-bc44ff76c-zs9w2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a5e9517ad5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.565125 containerd[1439]: 2025-09-12 17:40:30.533 [INFO][6097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:30.565125 containerd[1439]: 2025-09-12 17:40:30.533 [INFO][6097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" iface="eth0" netns="" Sep 12 17:40:30.565125 containerd[1439]: 2025-09-12 17:40:30.533 [INFO][6097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:30.565125 containerd[1439]: 2025-09-12 17:40:30.533 [INFO][6097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:30.565125 containerd[1439]: 2025-09-12 17:40:30.551 [INFO][6106] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" HandleID="k8s-pod-network.2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Workload="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:30.565125 containerd[1439]: 2025-09-12 17:40:30.552 [INFO][6106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.565125 containerd[1439]: 2025-09-12 17:40:30.552 [INFO][6106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:30.565125 containerd[1439]: 2025-09-12 17:40:30.560 [WARNING][6106] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" HandleID="k8s-pod-network.2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Workload="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:30.565125 containerd[1439]: 2025-09-12 17:40:30.560 [INFO][6106] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" HandleID="k8s-pod-network.2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Workload="localhost-k8s-calico--apiserver--bc44ff76c--zs9w2-eth0" Sep 12 17:40:30.565125 containerd[1439]: 2025-09-12 17:40:30.561 [INFO][6106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.565125 containerd[1439]: 2025-09-12 17:40:30.563 [INFO][6097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa" Sep 12 17:40:30.565499 containerd[1439]: time="2025-09-12T17:40:30.565162402Z" level=info msg="TearDown network for sandbox \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\" successfully" Sep 12 17:40:30.570812 containerd[1439]: time="2025-09-12T17:40:30.570763914Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:30.570902 containerd[1439]: time="2025-09-12T17:40:30.570828234Z" level=info msg="RemovePodSandbox \"2fd3bd9a197e6c61185b626c6d75675a6acd16255dcf5f51b85a33cfa59df4aa\" returns successfully" Sep 12 17:40:30.571508 containerd[1439]: time="2025-09-12T17:40:30.571250873Z" level=info msg="StopPodSandbox for \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\"" Sep 12 17:40:30.635075 containerd[1439]: 2025-09-12 17:40:30.602 [WARNING][6124] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7pms6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e93698b-a189-45ca-894b-4585a51c5842", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6", Pod:"csi-node-driver-7pms6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali350b959255e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.635075 containerd[1439]: 2025-09-12 17:40:30.602 [INFO][6124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:30.635075 containerd[1439]: 2025-09-12 17:40:30.602 [INFO][6124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" iface="eth0" netns="" Sep 12 17:40:30.635075 containerd[1439]: 2025-09-12 17:40:30.602 [INFO][6124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:30.635075 containerd[1439]: 2025-09-12 17:40:30.602 [INFO][6124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:30.635075 containerd[1439]: 2025-09-12 17:40:30.621 [INFO][6132] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" HandleID="k8s-pod-network.35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Workload="localhost-k8s-csi--node--driver--7pms6-eth0" Sep 12 17:40:30.635075 containerd[1439]: 2025-09-12 17:40:30.621 [INFO][6132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.635075 containerd[1439]: 2025-09-12 17:40:30.621 [INFO][6132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:30.635075 containerd[1439]: 2025-09-12 17:40:30.630 [WARNING][6132] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" HandleID="k8s-pod-network.35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Workload="localhost-k8s-csi--node--driver--7pms6-eth0" Sep 12 17:40:30.635075 containerd[1439]: 2025-09-12 17:40:30.630 [INFO][6132] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" HandleID="k8s-pod-network.35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Workload="localhost-k8s-csi--node--driver--7pms6-eth0" Sep 12 17:40:30.635075 containerd[1439]: 2025-09-12 17:40:30.631 [INFO][6132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.635075 containerd[1439]: 2025-09-12 17:40:30.633 [INFO][6124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:30.635666 containerd[1439]: time="2025-09-12T17:40:30.635517460Z" level=info msg="TearDown network for sandbox \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\" successfully" Sep 12 17:40:30.635666 containerd[1439]: time="2025-09-12T17:40:30.635573220Z" level=info msg="StopPodSandbox for \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\" returns successfully" Sep 12 17:40:30.636039 containerd[1439]: time="2025-09-12T17:40:30.636018820Z" level=info msg="RemovePodSandbox for \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\"" Sep 12 17:40:30.636362 containerd[1439]: time="2025-09-12T17:40:30.636120819Z" level=info msg="Forcibly stopping sandbox \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\"" Sep 12 17:40:30.705056 containerd[1439]: 2025-09-12 17:40:30.669 [WARNING][6151] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7pms6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e93698b-a189-45ca-894b-4585a51c5842", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a92f8d55051e499379bf6430bd5e862a7c80c2c4b77b7342e813e818295820b6", Pod:"csi-node-driver-7pms6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali350b959255e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.705056 containerd[1439]: 2025-09-12 17:40:30.669 [INFO][6151] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:30.705056 containerd[1439]: 2025-09-12 17:40:30.669 [INFO][6151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" iface="eth0" netns="" Sep 12 17:40:30.705056 containerd[1439]: 2025-09-12 17:40:30.669 [INFO][6151] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:30.705056 containerd[1439]: 2025-09-12 17:40:30.669 [INFO][6151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:30.705056 containerd[1439]: 2025-09-12 17:40:30.686 [INFO][6160] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" HandleID="k8s-pod-network.35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Workload="localhost-k8s-csi--node--driver--7pms6-eth0" Sep 12 17:40:30.705056 containerd[1439]: 2025-09-12 17:40:30.686 [INFO][6160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.705056 containerd[1439]: 2025-09-12 17:40:30.686 [INFO][6160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:30.705056 containerd[1439]: 2025-09-12 17:40:30.699 [WARNING][6160] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" HandleID="k8s-pod-network.35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Workload="localhost-k8s-csi--node--driver--7pms6-eth0" Sep 12 17:40:30.705056 containerd[1439]: 2025-09-12 17:40:30.699 [INFO][6160] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" HandleID="k8s-pod-network.35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Workload="localhost-k8s-csi--node--driver--7pms6-eth0" Sep 12 17:40:30.705056 containerd[1439]: 2025-09-12 17:40:30.701 [INFO][6160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.705056 containerd[1439]: 2025-09-12 17:40:30.703 [INFO][6151] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd" Sep 12 17:40:30.707262 containerd[1439]: time="2025-09-12T17:40:30.705356960Z" level=info msg="TearDown network for sandbox \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\" successfully" Sep 12 17:40:30.708740 containerd[1439]: time="2025-09-12T17:40:30.708706595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:30.708896 containerd[1439]: time="2025-09-12T17:40:30.708865555Z" level=info msg="RemovePodSandbox \"35dd759d3ea3db130bdcdcd6b6505314293cedc52e528b39bac09e05633d6cfd\" returns successfully" Sep 12 17:40:30.709365 containerd[1439]: time="2025-09-12T17:40:30.709337834Z" level=info msg="StopPodSandbox for \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\"" Sep 12 17:40:30.774792 containerd[1439]: 2025-09-12 17:40:30.742 [WARNING][6178] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0", GenerateName:"calico-apiserver-bc44ff76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc49e149-a094-4d79-a8c7-27e8dff370b3", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc44ff76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0", Pod:"calico-apiserver-bc44ff76c-46zh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic40effe1a98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.774792 containerd[1439]: 2025-09-12 17:40:30.743 [INFO][6178] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:30.774792 containerd[1439]: 2025-09-12 17:40:30.743 [INFO][6178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" iface="eth0" netns="" Sep 12 17:40:30.774792 containerd[1439]: 2025-09-12 17:40:30.743 [INFO][6178] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:30.774792 containerd[1439]: 2025-09-12 17:40:30.743 [INFO][6178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:30.774792 containerd[1439]: 2025-09-12 17:40:30.759 [INFO][6187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" HandleID="k8s-pod-network.79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Workload="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:30.774792 containerd[1439]: 2025-09-12 17:40:30.760 [INFO][6187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.774792 containerd[1439]: 2025-09-12 17:40:30.760 [INFO][6187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:30.774792 containerd[1439]: 2025-09-12 17:40:30.769 [WARNING][6187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" HandleID="k8s-pod-network.79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Workload="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:30.774792 containerd[1439]: 2025-09-12 17:40:30.769 [INFO][6187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" HandleID="k8s-pod-network.79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Workload="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:30.774792 containerd[1439]: 2025-09-12 17:40:30.771 [INFO][6187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.774792 containerd[1439]: 2025-09-12 17:40:30.773 [INFO][6178] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:30.775789 containerd[1439]: time="2025-09-12T17:40:30.774773860Z" level=info msg="TearDown network for sandbox \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\" successfully" Sep 12 17:40:30.775789 containerd[1439]: time="2025-09-12T17:40:30.775611059Z" level=info msg="StopPodSandbox for \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\" returns successfully" Sep 12 17:40:30.776115 containerd[1439]: time="2025-09-12T17:40:30.776087058Z" level=info msg="RemovePodSandbox for \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\"" Sep 12 17:40:30.776179 containerd[1439]: time="2025-09-12T17:40:30.776125498Z" level=info msg="Forcibly stopping sandbox \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\"" Sep 12 17:40:30.839720 containerd[1439]: 2025-09-12 17:40:30.807 [WARNING][6205] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0", GenerateName:"calico-apiserver-bc44ff76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc49e149-a094-4d79-a8c7-27e8dff370b3", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc44ff76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86f8f64925724b53cb4240faab8702f2d1bdf8e53a140145f38546ad724e3dd0", Pod:"calico-apiserver-bc44ff76c-46zh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic40effe1a98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.839720 containerd[1439]: 2025-09-12 17:40:30.807 [INFO][6205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:30.839720 containerd[1439]: 2025-09-12 17:40:30.807 [INFO][6205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" iface="eth0" netns="" Sep 12 17:40:30.839720 containerd[1439]: 2025-09-12 17:40:30.807 [INFO][6205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:30.839720 containerd[1439]: 2025-09-12 17:40:30.807 [INFO][6205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:30.839720 containerd[1439]: 2025-09-12 17:40:30.824 [INFO][6214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" HandleID="k8s-pod-network.79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Workload="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:30.839720 containerd[1439]: 2025-09-12 17:40:30.824 [INFO][6214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.839720 containerd[1439]: 2025-09-12 17:40:30.824 [INFO][6214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:30.839720 containerd[1439]: 2025-09-12 17:40:30.834 [WARNING][6214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" HandleID="k8s-pod-network.79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Workload="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:30.839720 containerd[1439]: 2025-09-12 17:40:30.834 [INFO][6214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" HandleID="k8s-pod-network.79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Workload="localhost-k8s-calico--apiserver--bc44ff76c--46zh9-eth0" Sep 12 17:40:30.839720 containerd[1439]: 2025-09-12 17:40:30.835 [INFO][6214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.839720 containerd[1439]: 2025-09-12 17:40:30.838 [INFO][6205] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156" Sep 12 17:40:30.840103 containerd[1439]: time="2025-09-12T17:40:30.839749646Z" level=info msg="TearDown network for sandbox \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\" successfully" Sep 12 17:40:30.843402 containerd[1439]: time="2025-09-12T17:40:30.843371081Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:30.843500 containerd[1439]: time="2025-09-12T17:40:30.843432841Z" level=info msg="RemovePodSandbox \"79146bdbaaf5a3c2733a866df6fe58c91e48d01256c25bf95e32b1755921a156\" returns successfully" Sep 12 17:40:32.338120 systemd[1]: Started sshd@12-10.0.0.153:22-10.0.0.1:35584.service - OpenSSH per-connection server daemon (10.0.0.1:35584). Sep 12 17:40:32.380331 sshd[6231]: Accepted publickey for core from 10.0.0.1 port 35584 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:40:32.381473 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:32.385353 systemd-logind[1420]: New session 13 of user core. Sep 12 17:40:32.391675 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:40:32.527738 sshd[6231]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:32.536928 systemd[1]: sshd@12-10.0.0.153:22-10.0.0.1:35584.service: Deactivated successfully. Sep 12 17:40:32.538600 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:40:32.540030 systemd-logind[1420]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:40:32.541592 systemd[1]: Started sshd@13-10.0.0.153:22-10.0.0.1:35586.service - OpenSSH per-connection server daemon (10.0.0.1:35586). Sep 12 17:40:32.542458 systemd-logind[1420]: Removed session 13. Sep 12 17:40:32.577821 sshd[6245]: Accepted publickey for core from 10.0.0.1 port 35586 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:40:32.579176 sshd[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:32.582928 systemd-logind[1420]: New session 14 of user core. Sep 12 17:40:32.588730 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:40:32.780247 sshd[6245]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:32.788078 systemd[1]: sshd@13-10.0.0.153:22-10.0.0.1:35586.service: Deactivated successfully. Sep 12 17:40:32.789733 systemd[1]: session-14.scope: Deactivated successfully. 
Sep 12 17:40:32.791016 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:40:32.792244 systemd[1]: Started sshd@14-10.0.0.153:22-10.0.0.1:35588.service - OpenSSH per-connection server daemon (10.0.0.1:35588).
Sep 12 17:40:32.793032 systemd-logind[1420]: Removed session 14.
Sep 12 17:40:32.853325 sshd[6258]: Accepted publickey for core from 10.0.0.1 port 35588 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y
Sep 12 17:40:32.855060 sshd[6258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:40:32.859130 systemd-logind[1420]: New session 15 of user core.
Sep 12 17:40:32.867736 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:40:33.460300 sshd[6258]: pam_unix(sshd:session): session closed for user core
Sep 12 17:40:33.470285 systemd[1]: sshd@14-10.0.0.153:22-10.0.0.1:35588.service: Deactivated successfully.
Sep 12 17:40:33.473831 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:40:33.477551 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:40:33.484054 systemd[1]: Started sshd@15-10.0.0.153:22-10.0.0.1:35596.service - OpenSSH per-connection server daemon (10.0.0.1:35596).
Sep 12 17:40:33.486171 systemd-logind[1420]: Removed session 15.
Sep 12 17:40:33.520325 sshd[6279]: Accepted publickey for core from 10.0.0.1 port 35596 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y
Sep 12 17:40:33.521490 sshd[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:40:33.525275 systemd-logind[1420]: New session 16 of user core.
Sep 12 17:40:33.538766 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:40:34.013318 sshd[6279]: pam_unix(sshd:session): session closed for user core
Sep 12 17:40:34.024474 systemd[1]: sshd@15-10.0.0.153:22-10.0.0.1:35596.service: Deactivated successfully.
Sep 12 17:40:34.025891 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:40:34.028583 systemd-logind[1420]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:40:34.034883 systemd[1]: Started sshd@16-10.0.0.153:22-10.0.0.1:35604.service - OpenSSH per-connection server daemon (10.0.0.1:35604).
Sep 12 17:40:34.038002 systemd-logind[1420]: Removed session 16.
Sep 12 17:40:34.069930 sshd[6293]: Accepted publickey for core from 10.0.0.1 port 35604 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y
Sep 12 17:40:34.071146 sshd[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:40:34.074646 systemd-logind[1420]: New session 17 of user core.
Sep 12 17:40:34.085723 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:40:34.212234 sshd[6293]: pam_unix(sshd:session): session closed for user core
Sep 12 17:40:34.215608 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:40:34.216176 systemd[1]: sshd@16-10.0.0.153:22-10.0.0.1:35604.service: Deactivated successfully.
Sep 12 17:40:34.218348 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:40:34.219951 systemd-logind[1420]: Removed session 17.
Sep 12 17:40:37.674045 kubelet[2491]: E0912 17:40:37.673955 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:40:39.223430 systemd[1]: Started sshd@17-10.0.0.153:22-10.0.0.1:35606.service - OpenSSH per-connection server daemon (10.0.0.1:35606).
Sep 12 17:40:39.268212 sshd[6334]: Accepted publickey for core from 10.0.0.1 port 35606 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y
Sep 12 17:40:39.269402 sshd[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:40:39.273214 systemd-logind[1420]: New session 18 of user core.
Sep 12 17:40:39.282682 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:40:39.421072 sshd[6334]: pam_unix(sshd:session): session closed for user core
Sep 12 17:40:39.425157 systemd[1]: sshd@17-10.0.0.153:22-10.0.0.1:35606.service: Deactivated successfully.
Sep 12 17:40:39.426993 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:40:39.427565 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:40:39.428464 systemd-logind[1420]: Removed session 18.
Sep 12 17:40:44.432270 systemd[1]: Started sshd@18-10.0.0.153:22-10.0.0.1:37980.service - OpenSSH per-connection server daemon (10.0.0.1:37980).
Sep 12 17:40:44.472202 sshd[6378]: Accepted publickey for core from 10.0.0.1 port 37980 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y
Sep 12 17:40:44.473437 sshd[6378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:40:44.477297 systemd-logind[1420]: New session 19 of user core.
Sep 12 17:40:44.487690 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:40:44.648721 sshd[6378]: pam_unix(sshd:session): session closed for user core
Sep 12 17:40:44.651911 systemd[1]: sshd@18-10.0.0.153:22-10.0.0.1:37980.service: Deactivated successfully.
Sep 12 17:40:44.655068 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:40:44.655612 systemd-logind[1420]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:40:44.656412 systemd-logind[1420]: Removed session 19.