Sep 12 17:29:49.891542 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 17:29:49.891567 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 15:59:19 -00 2025
Sep 12 17:29:49.891577 kernel: KASLR enabled
Sep 12 17:29:49.891583 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:29:49.891589 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 12 17:29:49.891595 kernel: random: crng init done
Sep 12 17:29:49.891602 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:29:49.891608 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 12 17:29:49.891614 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 17:29:49.891622 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:49.891629 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:49.891635 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:49.891641 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:49.891647 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:49.891655 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:49.891663 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:49.891669 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:49.891676 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:49.891682 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 12 17:29:49.891688 kernel: NUMA: Failed to initialise from firmware
Sep 12 17:29:49.891695 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:29:49.891701 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 12 17:29:49.891707 kernel: Zone ranges:
Sep 12 17:29:49.891714 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:29:49.891720 kernel: DMA32 empty
Sep 12 17:29:49.891728 kernel: Normal empty
Sep 12 17:29:49.891734 kernel: Movable zone start for each node
Sep 12 17:29:49.891741 kernel: Early memory node ranges
Sep 12 17:29:49.891747 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 12 17:29:49.891753 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 12 17:29:49.891760 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 12 17:29:49.891777 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 12 17:29:49.891785 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 12 17:29:49.891791 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 12 17:29:49.891803 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 12 17:29:49.891810 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:29:49.891817 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 12 17:29:49.891826 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:29:49.891832 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 17:29:49.891839 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:29:49.891848 kernel: psci: Trusted OS migration not required
Sep 12 17:29:49.891855 kernel: psci: SMC Calling Convention v1.1
Sep 12 17:29:49.891863 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 12 17:29:49.891871 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 17:29:49.891878 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 17:29:49.891885 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 12 17:29:49.891893 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:29:49.891900 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:29:49.891907 kernel: CPU features: detected: Hardware dirty bit management
Sep 12 17:29:49.891913 kernel: CPU features: detected: Spectre-v4
Sep 12 17:29:49.891920 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:29:49.891927 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 17:29:49.891934 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 17:29:49.891942 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 17:29:49.891949 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 17:29:49.891955 kernel: alternatives: applying boot alternatives
Sep 12 17:29:49.891963 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:29:49.891970 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:29:49.891977 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:29:49.891984 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:29:49.891990 kernel: Fallback order for Node 0: 0
Sep 12 17:29:49.891997 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 12 17:29:49.892003 kernel: Policy zone: DMA
Sep 12 17:29:49.892010 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:29:49.892018 kernel: software IO TLB: area num 4.
Sep 12 17:29:49.892025 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 12 17:29:49.892032 kernel: Memory: 2386340K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 185948K reserved, 0K cma-reserved)
Sep 12 17:29:49.892039 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:29:49.892046 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:29:49.892054 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:29:49.892061 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:29:49.892068 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:29:49.892075 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:29:49.892082 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:29:49.892089 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:29:49.892098 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:29:49.892105 kernel: GICv3: 256 SPIs implemented
Sep 12 17:29:49.892112 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:29:49.892119 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:29:49.892126 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 17:29:49.892132 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 12 17:29:49.892139 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 12 17:29:49.892146 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 17:29:49.892153 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 12 17:29:49.892159 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 12 17:29:49.892166 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 12 17:29:49.892173 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:29:49.892182 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:29:49.892189 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 17:29:49.892196 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 17:29:49.892203 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 17:29:49.892209 kernel: arm-pv: using stolen time PV
Sep 12 17:29:49.892217 kernel: Console: colour dummy device 80x25
Sep 12 17:29:49.892223 kernel: ACPI: Core revision 20230628
Sep 12 17:29:49.892231 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 17:29:49.892238 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:29:49.892246 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:29:49.892255 kernel: landlock: Up and running.
Sep 12 17:29:49.892261 kernel: SELinux: Initializing.
Sep 12 17:29:49.892268 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:29:49.892275 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:29:49.892283 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:29:49.892290 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:29:49.892297 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:29:49.892304 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:29:49.892311 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 12 17:29:49.892320 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 12 17:29:49.892327 kernel: Remapping and enabling EFI services.
Sep 12 17:29:49.892337 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:29:49.892344 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:29:49.892351 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 12 17:29:49.892358 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 12 17:29:49.892365 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:29:49.892372 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 17:29:49.892379 kernel: Detected PIPT I-cache on CPU2
Sep 12 17:29:49.892387 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 12 17:29:49.892396 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 12 17:29:49.892404 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:29:49.892416 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 12 17:29:49.892425 kernel: Detected PIPT I-cache on CPU3
Sep 12 17:29:49.892433 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 12 17:29:49.892440 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 12 17:29:49.892448 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:29:49.892455 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 12 17:29:49.892463 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:29:49.892472 kernel: SMP: Total of 4 processors activated.
Sep 12 17:29:49.892479 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:29:49.892487 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 17:29:49.892494 kernel: CPU features: detected: Common not Private translations
Sep 12 17:29:49.892501 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:29:49.892509 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 12 17:29:49.892516 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 17:29:49.892523 kernel: CPU features: detected: LSE atomic instructions
Sep 12 17:29:49.892532 kernel: CPU features: detected: Privileged Access Never
Sep 12 17:29:49.892539 kernel: CPU features: detected: RAS Extension Support
Sep 12 17:29:49.892546 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 12 17:29:49.892553 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:29:49.892561 kernel: alternatives: applying system-wide alternatives
Sep 12 17:29:49.892568 kernel: devtmpfs: initialized
Sep 12 17:29:49.892575 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:29:49.892583 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:29:49.892590 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:29:49.892600 kernel: SMBIOS 3.0.0 present.
Sep 12 17:29:49.892607 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 12 17:29:49.892614 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:29:49.892621 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:29:49.892629 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:29:49.892637 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:29:49.892644 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:29:49.892652 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Sep 12 17:29:49.892659 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:29:49.892668 kernel: cpuidle: using governor menu
Sep 12 17:29:49.892676 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:29:49.892683 kernel: ASID allocator initialised with 32768 entries
Sep 12 17:29:49.892691 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:29:49.892699 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:29:49.892707 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 17:29:49.892714 kernel: Modules: 0 pages in range for non-PLT usage
Sep 12 17:29:49.892722 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 17:29:49.892729 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:29:49.892738 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:29:49.892747 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:29:49.892754 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:29:49.892766 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:29:49.892774 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:29:49.892782 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:29:49.892789 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:29:49.892797 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:29:49.892809 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:29:49.892818 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:29:49.892826 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:29:49.892833 kernel: ACPI: Interpreter enabled
Sep 12 17:29:49.892841 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:29:49.892848 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 17:29:49.892856 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 17:29:49.892863 kernel: printk: console [ttyAMA0] enabled
Sep 12 17:29:49.892871 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:29:49.893028 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:29:49.893110 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 17:29:49.893178 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 17:29:49.893245 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 12 17:29:49.893309 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 12 17:29:49.893320 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 12 17:29:49.893327 kernel: PCI host bridge to bus 0000:00
Sep 12 17:29:49.893400 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 12 17:29:49.893465 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 17:29:49.893542 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 12 17:29:49.893606 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:29:49.893690 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 12 17:29:49.893780 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 17:29:49.893969 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 12 17:29:49.894052 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 12 17:29:49.894123 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 17:29:49.894190 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 17:29:49.894259 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 12 17:29:49.894328 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 12 17:29:49.894392 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 12 17:29:49.894455 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 17:29:49.894531 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 12 17:29:49.894542 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 17:29:49.894550 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 17:29:49.894558 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 17:29:49.894566 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 17:29:49.894576 kernel: iommu: Default domain type: Translated
Sep 12 17:29:49.894586 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:29:49.894597 kernel: efivars: Registered efivars operations
Sep 12 17:29:49.894607 kernel: vgaarb: loaded
Sep 12 17:29:49.894617 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:29:49.894625 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:29:49.894632 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:29:49.894640 kernel: pnp: PnP ACPI init
Sep 12 17:29:49.894730 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 12 17:29:49.894742 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 17:29:49.894749 kernel: NET: Registered PF_INET protocol family
Sep 12 17:29:49.894757 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:29:49.894775 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:29:49.894782 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:29:49.894790 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:29:49.894834 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:29:49.894843 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:29:49.894851 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:29:49.894858 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:29:49.894865 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:29:49.894873 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:29:49.894883 kernel: kvm [1]: HYP mode not available
Sep 12 17:29:49.894890 kernel: Initialise system trusted keyrings
Sep 12 17:29:49.894897 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:29:49.894905 kernel: Key type asymmetric registered
Sep 12 17:29:49.894912 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:29:49.894920 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:29:49.894927 kernel: io scheduler mq-deadline registered
Sep 12 17:29:49.894934 kernel: io scheduler kyber registered
Sep 12 17:29:49.894941 kernel: io scheduler bfq registered
Sep 12 17:29:49.894950 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 17:29:49.894957 kernel: ACPI: button: Power Button [PWRB]
Sep 12 17:29:49.894965 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 17:29:49.895042 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 12 17:29:49.895053 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:29:49.895060 kernel: thunder_xcv, ver 1.0
Sep 12 17:29:49.895068 kernel: thunder_bgx, ver 1.0
Sep 12 17:29:49.895075 kernel: nicpf, ver 1.0
Sep 12 17:29:49.895083 kernel: nicvf, ver 1.0
Sep 12 17:29:49.895161 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:29:49.895224 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:29:49 UTC (1757698189)
Sep 12 17:29:49.895234 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:29:49.895242 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 12 17:29:49.895249 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 17:29:49.895257 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:29:49.895264 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:29:49.895273 kernel: Segment Routing with IPv6
Sep 12 17:29:49.895283 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:29:49.895290 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:29:49.895297 kernel: Key type dns_resolver registered
Sep 12 17:29:49.895304 kernel: registered taskstats version 1
Sep 12 17:29:49.895312 kernel: Loading compiled-in X.509 certificates
Sep 12 17:29:49.895319 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 2d576b5e69e6c5de2f731966fe8b55173c144d02'
Sep 12 17:29:49.895326 kernel: Key type .fscrypt registered
Sep 12 17:29:49.895334 kernel: Key type fscrypt-provisioning registered
Sep 12 17:29:49.895353 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:29:49.895362 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:29:49.895370 kernel: ima: No architecture policies found
Sep 12 17:29:49.895377 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:29:49.895385 kernel: clk: Disabling unused clocks
Sep 12 17:29:49.895392 kernel: Freeing unused kernel memory: 39488K
Sep 12 17:29:49.895399 kernel: Run /init as init process
Sep 12 17:29:49.895407 kernel: with arguments:
Sep 12 17:29:49.895414 kernel: /init
Sep 12 17:29:49.895421 kernel: with environment:
Sep 12 17:29:49.895430 kernel: HOME=/
Sep 12 17:29:49.895437 kernel: TERM=linux
Sep 12 17:29:49.895444 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:29:49.895454 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:29:49.895464 systemd[1]: Detected virtualization kvm.
Sep 12 17:29:49.895473 systemd[1]: Detected architecture arm64.
Sep 12 17:29:49.895480 systemd[1]: Running in initrd.
Sep 12 17:29:49.895490 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:29:49.895498 systemd[1]: Hostname set to <localhost>.
Sep 12 17:29:49.895507 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:29:49.895515 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:29:49.895524 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:29:49.895532 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:29:49.895541 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:29:49.895550 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:29:49.895559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:29:49.895573 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:29:49.895582 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:29:49.895591 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:29:49.895599 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:29:49.895607 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:29:49.895616 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:29:49.895625 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:29:49.895633 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:29:49.895642 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:29:49.895649 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:29:49.895658 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:29:49.895666 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:29:49.895674 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:29:49.895682 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:29:49.895690 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:29:49.895699 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:29:49.895709 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:29:49.895717 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:29:49.895725 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:29:49.895734 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:29:49.895741 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:29:49.895749 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:29:49.895757 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:29:49.895776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:29:49.895785 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:29:49.895793 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:29:49.895808 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:29:49.895839 systemd-journald[238]: Collecting audit messages is disabled.
Sep 12 17:29:49.895862 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:29:49.895871 systemd-journald[238]: Journal started
Sep 12 17:29:49.895892 systemd-journald[238]: Runtime Journal (/run/log/journal/9b3bced38dae49548736ca07f8b4b5c1) is 5.9M, max 47.3M, 41.4M free.
Sep 12 17:29:49.890718 systemd-modules-load[239]: Inserted module 'overlay'
Sep 12 17:29:49.898910 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:29:49.908825 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:29:49.910235 systemd-modules-load[239]: Inserted module 'br_netfilter'
Sep 12 17:29:49.911231 kernel: Bridge firewalling registered
Sep 12 17:29:49.915479 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:29:49.916990 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:29:49.920848 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:29:49.944037 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:29:49.946018 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:29:49.949009 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:29:49.953005 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:29:49.957682 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:29:49.961239 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:29:49.965963 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:29:49.981014 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:29:49.982874 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:29:49.985230 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:29:49.999116 dracut-cmdline[277]: dracut-dracut-053
Sep 12 17:29:50.001735 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:29:50.005908 systemd-resolved[274]: Positive Trust Anchors:
Sep 12 17:29:50.005920 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:29:50.005951 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:29:50.010741 systemd-resolved[274]: Defaulting to hostname 'linux'.
Sep 12 17:29:50.011788 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:29:50.016445 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:29:50.068838 kernel: SCSI subsystem initialized
Sep 12 17:29:50.072817 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:29:50.080828 kernel: iscsi: registered transport (tcp)
Sep 12 17:29:50.093833 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:29:50.093855 kernel: QLogic iSCSI HBA Driver
Sep 12 17:29:50.137611 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:29:50.158028 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:29:50.173988 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:29:50.174057 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:29:50.175176 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:29:50.221844 kernel: raid6: neonx8 gen() 15725 MB/s
Sep 12 17:29:50.238823 kernel: raid6: neonx4 gen() 15316 MB/s
Sep 12 17:29:50.255824 kernel: raid6: neonx2 gen() 12906 MB/s
Sep 12 17:29:50.272822 kernel: raid6: neonx1 gen() 10318 MB/s
Sep 12 17:29:50.291830 kernel: raid6: int64x8 gen() 7785 MB/s
Sep 12 17:29:50.308838 kernel: raid6: int64x4 gen() 7126 MB/s
Sep 12 17:29:50.325827 kernel: raid6: int64x2 gen() 6109 MB/s
Sep 12 17:29:50.343091 kernel: raid6: int64x1 gen() 5056 MB/s
Sep 12 17:29:50.343119 kernel: raid6: using algorithm neonx8 gen() 15725 MB/s
Sep 12 17:29:50.361087 kernel: raid6: .... xor() 11996 MB/s, rmw enabled
Sep 12 17:29:50.361128 kernel: raid6: using neon recovery algorithm
Sep 12 17:29:50.367948 kernel: xor: measuring software checksum speed
Sep 12 17:29:50.368012 kernel: 8regs : 19716 MB/sec
Sep 12 17:29:50.369388 kernel: 32regs : 19627 MB/sec
Sep 12 17:29:50.369424 kernel: arm64_neon : 26778 MB/sec
Sep 12 17:29:50.369445 kernel: xor: using function: arm64_neon (26778 MB/sec)
Sep 12 17:29:50.419830 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:29:50.432676 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:29:50.450068 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:29:50.467265 systemd-udevd[458]: Using default interface naming scheme 'v255'.
Sep 12 17:29:50.473104 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:29:50.489080 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:29:50.508392 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Sep 12 17:29:50.548001 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:29:50.560033 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:29:50.614591 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:29:50.621304 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:29:50.637729 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:29:50.640217 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:29:50.643467 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:29:50.644795 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:29:50.651951 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:29:50.663834 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 12 17:29:50.664037 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 17:29:50.663687 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:29:50.672153 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:29:50.672207 kernel: GPT:9289727 != 19775487
Sep 12 17:29:50.672217 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:29:50.672226 kernel: GPT:9289727 != 19775487
Sep 12 17:29:50.673188 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:29:50.673205 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:29:50.675637 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:29:50.675766 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:29:50.684051 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:29:50.685243 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:29:50.685404 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:29:50.688005 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:29:50.702474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:29:50.706365 kernel: BTRFS: device fsid 5a23a06a-00d4-4606-89bf-13e31a563129 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (521)
Sep 12 17:29:50.706402 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (506)
Sep 12 17:29:50.718159 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:29:50.723875 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 17:29:50.729450 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 17:29:50.733937 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 17:29:50.735250 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 17:29:50.742273 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:29:50.755998 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:29:50.761051 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:29:50.764925 disk-uuid[552]: Primary Header is updated.
Sep 12 17:29:50.764925 disk-uuid[552]: Secondary Entries is updated.
Sep 12 17:29:50.764925 disk-uuid[552]: Secondary Header is updated.
Sep 12 17:29:50.770836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:29:50.773818 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:29:50.777822 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:29:50.785088 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:29:51.779161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:29:51.779649 disk-uuid[554]: The operation has completed successfully.
Sep 12 17:29:51.822097 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:29:51.822934 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:29:51.851046 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:29:51.858061 sh[577]: Success
Sep 12 17:29:51.869861 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 17:29:51.913092 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:29:51.924448 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:29:51.928277 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:29:51.939096 kernel: BTRFS info (device dm-0): first mount of filesystem 5a23a06a-00d4-4606-89bf-13e31a563129
Sep 12 17:29:51.939156 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:29:51.939167 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:29:51.941055 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:29:51.941079 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:29:51.947494 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:29:51.949022 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:29:51.961051 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:29:51.962952 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:29:51.982983 kernel: BTRFS info (device vda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:29:51.983049 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:29:51.983061 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:29:51.988269 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:29:51.998951 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:29:52.000689 kernel: BTRFS info (device vda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:29:52.012541 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:29:52.021997 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:29:52.075561 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:29:52.089027 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:29:52.094793 ignition[692]: Ignition 2.19.0
Sep 12 17:29:52.094814 ignition[692]: Stage: fetch-offline
Sep 12 17:29:52.094864 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:29:52.094873 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:29:52.095066 ignition[692]: parsed url from cmdline: ""
Sep 12 17:29:52.095070 ignition[692]: no config URL provided
Sep 12 17:29:52.095075 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:29:52.095082 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:29:52.095106 ignition[692]: op(1): [started] loading QEMU firmware config module
Sep 12 17:29:52.095110 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 17:29:52.103051 ignition[692]: op(1): [finished] loading QEMU firmware config module
Sep 12 17:29:52.110572 systemd-networkd[767]: lo: Link UP
Sep 12 17:29:52.110586 systemd-networkd[767]: lo: Gained carrier
Sep 12 17:29:52.111617 systemd-networkd[767]: Enumeration completed
Sep 12 17:29:52.111744 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:29:52.112311 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:29:52.112314 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:29:52.113589 systemd[1]: Reached target network.target - Network.
Sep 12 17:29:52.115454 systemd-networkd[767]: eth0: Link UP
Sep 12 17:29:52.115457 systemd-networkd[767]: eth0: Gained carrier
Sep 12 17:29:52.115465 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:29:52.132861 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:29:52.157358 ignition[692]: parsing config with SHA512: 4cae2df4e7bbaba600d53d17b0df9618bbf83e04cfbbc3d625e9933033f9d14d5fdb99b80f79491a6b86ec821a5c345d1db01708727b6bb1570e33253bdce102
Sep 12 17:29:52.163590 unknown[692]: fetched base config from "system"
Sep 12 17:29:52.163601 unknown[692]: fetched user config from "qemu"
Sep 12 17:29:52.164082 ignition[692]: fetch-offline: fetch-offline passed
Sep 12 17:29:52.164145 ignition[692]: Ignition finished successfully
Sep 12 17:29:52.166762 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:29:52.169192 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 17:29:52.180049 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:29:52.192844 ignition[773]: Ignition 2.19.0
Sep 12 17:29:52.192855 ignition[773]: Stage: kargs
Sep 12 17:29:52.193042 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:29:52.193053 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:29:52.194051 ignition[773]: kargs: kargs passed
Sep 12 17:29:52.194107 ignition[773]: Ignition finished successfully
Sep 12 17:29:52.198836 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:29:52.211031 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:29:52.222409 ignition[781]: Ignition 2.19.0
Sep 12 17:29:52.222431 ignition[781]: Stage: disks
Sep 12 17:29:52.222607 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:29:52.226178 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:29:52.222617 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:29:52.227551 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:29:52.223655 ignition[781]: disks: disks passed
Sep 12 17:29:52.229984 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:29:52.223707 ignition[781]: Ignition finished successfully
Sep 12 17:29:52.232122 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:29:52.234153 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:29:52.235889 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:29:52.249005 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:29:52.259983 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:29:52.265258 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:29:52.282969 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:29:52.332287 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:29:52.333971 kernel: EXT4-fs (vda9): mounted filesystem fc6c61a7-153d-4e7f-95c0-bffdb4824d71 r/w with ordered data mode. Quota mode: none.
Sep 12 17:29:52.333681 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:29:52.347946 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:29:52.352970 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:29:52.354175 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:29:52.359877 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (799)
Sep 12 17:29:52.354229 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:29:52.354256 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:29:52.369462 kernel: BTRFS info (device vda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:29:52.369486 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:29:52.369497 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:29:52.359976 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:29:52.369267 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:29:52.373650 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:29:52.375464 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:29:52.411679 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:29:52.420016 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:29:52.424808 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:29:52.429401 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:29:52.515851 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:29:52.527008 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:29:52.530871 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:29:52.534822 kernel: BTRFS info (device vda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:29:52.565388 ignition[915]: INFO : Ignition 2.19.0
Sep 12 17:29:52.565388 ignition[915]: INFO : Stage: mount
Sep 12 17:29:52.568598 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:29:52.568598 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:29:52.568598 ignition[915]: INFO : mount: mount passed
Sep 12 17:29:52.568598 ignition[915]: INFO : Ignition finished successfully
Sep 12 17:29:52.567346 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:29:52.571642 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:29:52.594423 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:29:52.937714 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:29:52.948108 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:29:52.965565 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (928)
Sep 12 17:29:52.967866 kernel: BTRFS info (device vda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:29:52.967893 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:29:52.967905 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:29:52.973825 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:29:52.975741 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:29:53.004873 ignition[945]: INFO : Ignition 2.19.0
Sep 12 17:29:53.004873 ignition[945]: INFO : Stage: files
Sep 12 17:29:53.007860 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:29:53.007860 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:29:53.007860 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:29:53.015231 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:29:53.015231 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:29:53.021808 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:29:53.026164 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:29:53.026164 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:29:53.025250 unknown[945]: wrote ssh authorized keys file for user: core
Sep 12 17:29:53.033728 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 17:29:53.033728 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 17:29:53.033728 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 17:29:53.033728 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 12 17:29:53.076768 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:29:53.311008 systemd-networkd[767]: eth0: Gained IPv6LL
Sep 12 17:29:53.465075 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 17:29:53.467424 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:29:53.467424 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:29:53.467424 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:29:53.467424 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:29:53.467424 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:29:53.467424 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:29:53.467424 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:29:53.467424 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:29:53.484936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
"/sysroot/etc/flatcar/update.conf" Sep 12 17:29:53.484936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:29:53.484936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:29:53.484936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:29:53.484936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 12 17:29:53.819616 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:29:54.094733 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:29:54.094733 ignition[945]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 12 17:29:54.100101 ignition[945]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 17:29:54.100101 ignition[945]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 17:29:54.100101 ignition[945]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 12 17:29:54.100101 ignition[945]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 12 17:29:54.100101 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:29:54.100101 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:29:54.100101 ignition[945]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 12 17:29:54.100101 ignition[945]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Sep 12 17:29:54.100101 ignition[945]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:29:54.100101 ignition[945]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:29:54.100101 ignition[945]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Sep 12 17:29:54.100101 ignition[945]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 17:29:54.132221 ignition[945]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:29:54.135830 ignition[945]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:29:54.138525 ignition[945]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 17:29:54.138525 ignition[945]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Sep 12 
Sep 12 17:29:54.138525 ignition[945]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:29:54.138525 ignition[945]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:29:54.138525 ignition[945]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:29:54.138525 ignition[945]: INFO : files: files passed
Sep 12 17:29:54.138525 ignition[945]: INFO : Ignition finished successfully
Sep 12 17:29:54.138947 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:29:54.151991 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:29:54.153946 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:29:54.155926 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:29:54.157844 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:29:54.163671 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 17:29:54.167086 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:29:54.167086 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:29:54.172996 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:29:54.172321 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:29:54.176419 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:29:54.189032 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:29:54.213122 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:29:54.213232 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:29:54.215510 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:29:54.217505 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:29:54.219441 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:29:54.220330 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:29:54.237183 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:29:54.245049 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:29:54.253301 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:29:54.254611 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:29:54.256999 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:29:54.258808 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:29:54.258943 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:29:54.261961 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:29:54.264112 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:29:54.265795 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:29:54.267718 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:29:54.269731 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:29:54.272286 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:29:54.274182 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:29:54.276192 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:29:54.278115 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:29:54.279904 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:29:54.281561 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:29:54.281783 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:29:54.284226 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:29:54.286346 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:29:54.288347 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:29:54.291898 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:29:54.293179 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:29:54.293306 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:29:54.296402 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:29:54.296530 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:29:54.298572 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:29:54.300166 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:29:54.304880 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:29:54.306469 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:29:54.308657 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:29:54.310251 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:29:54.310343 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:29:54.311909 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:29:54.311996 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:29:54.313578 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:29:54.313698 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:29:54.315476 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:29:54.315588 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:29:54.327988 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:29:54.329618 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:29:54.330521 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:29:54.330647 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:29:54.332649 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:29:54.332766 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:29:54.338315 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:29:54.338413 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:29:54.344337 ignition[1000]: INFO : Ignition 2.19.0
Sep 12 17:29:54.344337 ignition[1000]: INFO : Stage: umount
Sep 12 17:29:54.346887 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:29:54.346887 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:29:54.346887 ignition[1000]: INFO : umount: umount passed
Sep 12 17:29:54.346887 ignition[1000]: INFO : Ignition finished successfully
Sep 12 17:29:54.347631 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:29:54.347727 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:29:54.349388 systemd[1]: Stopped target network.target - Network.
Sep 12 17:29:54.351079 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:29:54.351141 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:29:54.356528 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:29:54.356585 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:29:54.359149 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:29:54.359208 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:29:54.360776 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:29:54.360846 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:29:54.362926 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:29:54.364631 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:29:54.367229 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:29:54.374981 systemd-networkd[767]: eth0: DHCPv6 lease lost
Sep 12 17:29:54.376722 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:29:54.376870 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:29:54.379217 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:29:54.380874 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:29:54.382658 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:29:54.382723 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:29:54.391933 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:29:54.392814 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:29:54.392877 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:29:54.395251 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:29:54.395301 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:29:54.397124 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:29:54.397172 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:29:54.399277 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:29:54.399324 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:29:54.401402 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:29:54.412893 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:29:54.413018 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:29:54.415785 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:29:54.415933 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:29:54.418576 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:29:54.418615 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:29:54.420594 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:29:54.420626 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:29:54.422655 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:29:54.422705 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:29:54.425458 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:29:54.425506 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:29:54.428151 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:29:54.428203 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:29:54.446997 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:29:54.448055 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:29:54.448118 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:29:54.450273 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 17:29:54.450320 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:29:54.452372 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:29:54.452420 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:29:54.454642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:29:54.454691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:29:54.457011 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:29:54.457119 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:29:54.459123 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:29:54.459200 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:29:54.461566 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:29:54.463767 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:29:54.463893 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:29:54.471974 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:29:54.480450 systemd[1]: Switching root.
Sep 12 17:29:54.512750 systemd-journald[238]: Journal stopped
Sep 12 17:29:55.358595 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:29:55.358652 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:29:55.358665 kernel: SELinux: policy capability open_perms=1
Sep 12 17:29:55.358679 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:29:55.358689 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:29:55.358699 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:29:55.358710 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:29:55.358726 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:29:55.358746 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:29:55.358757 kernel: audit: type=1403 audit(1757698194.721:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:29:55.358769 systemd[1]: Successfully loaded SELinux policy in 34.566ms.
Sep 12 17:29:55.358787 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.727ms.
Sep 12 17:29:55.358827 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:29:55.358840 systemd[1]: Detected virtualization kvm.
Sep 12 17:29:55.358852 systemd[1]: Detected architecture arm64.
Sep 12 17:29:55.358862 systemd[1]: Detected first boot.
Sep 12 17:29:55.358875 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:29:55.358886 zram_generator::config[1062]: No configuration found.
Sep 12 17:29:55.358898 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:29:55.358908 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:29:55.358918 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 17:29:55.358929 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:29:55.358940 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:29:55.358950 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:29:55.358963 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:29:55.358973 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:29:55.358984 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:29:55.358999 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:29:55.359010 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:29:55.359024 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:29:55.359035 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:29:55.359047 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:29:55.359058 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:29:55.359070 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:29:55.359081 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:29:55.359091 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 12 17:29:55.359102 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:29:55.359112 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:29:55.359123 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:29:55.359133 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:29:55.359144 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:29:55.359156 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:29:55.359167 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:29:55.359178 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:29:55.359188 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:29:55.359198 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:29:55.359209 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:29:55.359219 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:29:55.359230 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:29:55.359240 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:29:55.359252 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:29:55.359263 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:29:55.359274 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:29:55.359284 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:29:55.359295 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:29:55.359305 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:29:55.359315 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:29:55.359326 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:29:55.359337 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:29:55.359349 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:29:55.359360 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:29:55.359370 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:29:55.359381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:29:55.359391 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:29:55.359401 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:29:55.359412 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:29:55.359423 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 12 17:29:55.359436 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 12 17:29:55.359446 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:29:55.359456 kernel: loop: module loaded
Sep 12 17:29:55.359466 kernel: fuse: init (API version 7.39)
Sep 12 17:29:55.359475 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:29:55.359485 kernel: ACPI: bus type drm_connector registered
Sep 12 17:29:55.359496 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:29:55.359507 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:29:55.359517 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:29:55.359530 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:29:55.359558 systemd-journald[1141]: Collecting audit messages is disabled.
Sep 12 17:29:55.359581 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:29:55.359592 systemd-journald[1141]: Journal started
Sep 12 17:29:55.359614 systemd-journald[1141]: Runtime Journal (/run/log/journal/9b3bced38dae49548736ca07f8b4b5c1) is 5.9M, max 47.3M, 41.4M free.
Sep 12 17:29:55.362833 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:29:55.362978 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:29:55.364082 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:29:55.365326 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:29:55.366637 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:29:55.368106 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:29:55.369583 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:29:55.371210 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:29:55.371384 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:29:55.372883 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:29:55.373049 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:29:55.374417 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:29:55.374581 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:29:55.375950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:29:55.376115 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:29:55.377581 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:29:55.377754 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:29:55.379287 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:29:55.379505 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:29:55.381457 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:29:55.383001 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:29:55.384605 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:29:55.396468 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:29:55.406928 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:29:55.409180 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:29:55.410330 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:29:55.415018 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:29:55.417469 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:29:55.418763 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:29:55.420022 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:29:55.421163 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:29:55.424991 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:29:55.426903 systemd-journald[1141]: Time spent on flushing to /var/log/journal/9b3bced38dae49548736ca07f8b4b5c1 is 11.569ms for 845 entries.
Sep 12 17:29:55.426903 systemd-journald[1141]: System Journal (/var/log/journal/9b3bced38dae49548736ca07f8b4b5c1) is 8.0M, max 195.6M, 187.6M free.
Sep 12 17:29:55.445559 systemd-journald[1141]: Received client request to flush runtime journal.
Sep 12 17:29:55.427545 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:29:55.432322 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:29:55.436212 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:29:55.437709 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:29:55.443871 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:29:55.448407 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:29:55.451980 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:29:55.454316 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:29:55.462278 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:29:55.465492 udevadm[1210]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 12 17:29:55.469561 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Sep 12 17:29:55.469580 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Sep 12 17:29:55.474419 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:29:55.487058 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:29:55.508351 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:29:55.519008 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:29:55.533794 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 12 17:29:55.533827 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 12 17:29:55.537993 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:29:55.897743 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:29:55.911073 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:29:55.932566 systemd-udevd[1226]: Using default interface naming scheme 'v255'.
Sep 12 17:29:55.948771 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:29:55.962010 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:29:55.969050 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:29:55.991072 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Sep 12 17:29:56.018838 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1240)
Sep 12 17:29:56.023511 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:29:56.049217 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:29:56.073778 systemd-networkd[1236]: lo: Link UP
Sep 12 17:29:56.073786 systemd-networkd[1236]: lo: Gained carrier
Sep 12 17:29:56.074470 systemd-networkd[1236]: Enumeration completed
Sep 12 17:29:56.074609 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:29:56.074987 systemd-networkd[1236]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:29:56.074994 systemd-networkd[1236]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:29:56.076827 systemd-networkd[1236]: eth0: Link UP
Sep 12 17:29:56.076837 systemd-networkd[1236]: eth0: Gained carrier
Sep 12 17:29:56.076849 systemd-networkd[1236]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:29:56.092059 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:29:56.095117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:29:56.095868 systemd-networkd[1236]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:29:56.103533 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:29:56.106282 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:29:56.119924 lvm[1264]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:29:56.135025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:29:56.147325 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:29:56.148865 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:29:56.165178 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:29:56.169909 lvm[1272]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:29:56.200436 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:29:56.202043 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:29:56.203305 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:29:56.203336 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:29:56.204377 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:29:56.206423 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:29:56.221972 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:29:56.224507 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:29:56.225738 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:29:56.226766 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:29:56.228981 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:29:56.231473 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:29:56.233510 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:29:56.242234 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:29:56.247270 kernel: loop0: detected capacity change from 0 to 114432
Sep 12 17:29:56.259845 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:29:56.261264 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:29:56.264333 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:29:56.294851 kernel: loop1: detected capacity change from 0 to 203944
Sep 12 17:29:56.349838 kernel: loop2: detected capacity change from 0 to 114328
Sep 12 17:29:56.387855 kernel: loop3: detected capacity change from 0 to 114432
Sep 12 17:29:56.399865 kernel: loop4: detected capacity change from 0 to 203944
Sep 12 17:29:56.408832 kernel: loop5: detected capacity change from 0 to 114328
Sep 12 17:29:56.414347 (sd-merge)[1295]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 17:29:56.414961 (sd-merge)[1295]: Merged extensions into '/usr'.
Sep 12 17:29:56.419120 systemd[1]: Reloading requested from client PID 1280 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:29:56.419143 systemd[1]: Reloading...
Sep 12 17:29:56.475193 zram_generator::config[1320]: No configuration found.
Sep 12 17:29:56.556714 ldconfig[1277]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:29:56.585605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:29:56.630385 systemd[1]: Reloading finished in 210 ms.
Sep 12 17:29:56.645758 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:29:56.647279 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:29:56.665021 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:29:56.667097 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:29:56.672195 systemd[1]: Reloading requested from client PID 1364 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:29:56.672210 systemd[1]: Reloading...
Sep 12 17:29:56.684571 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:29:56.684866 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:29:56.685508 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:29:56.685745 systemd-tmpfiles[1365]: ACLs are not supported, ignoring.
Sep 12 17:29:56.685812 systemd-tmpfiles[1365]: ACLs are not supported, ignoring.
Sep 12 17:29:56.688105 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:29:56.688121 systemd-tmpfiles[1365]: Skipping /boot
Sep 12 17:29:56.695424 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:29:56.695441 systemd-tmpfiles[1365]: Skipping /boot
Sep 12 17:29:56.727831 zram_generator::config[1394]: No configuration found.
Sep 12 17:29:56.820324 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:29:56.870387 systemd[1]: Reloading finished in 197 ms.
Sep 12 17:29:56.890442 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:29:56.923077 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:29:56.926040 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:29:56.927427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:29:56.929089 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:29:56.934154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:29:56.939828 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:29:56.941436 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:29:56.943564 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:29:56.950100 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:29:56.952987 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:29:56.955513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:29:56.955680 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:29:56.957760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:29:56.958020 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:29:56.960036 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:29:56.963061 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:29:56.971795 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:29:56.973026 augenrules[1466]: No rules
Sep 12 17:29:56.977197 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:29:56.982136 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:29:56.987194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:29:56.989614 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:29:56.991670 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:29:56.994206 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:29:56.996598 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:29:56.998626 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:29:56.998884 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:29:57.000876 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:29:57.001030 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:29:57.002942 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:29:57.003145 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:29:57.006257 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:29:57.015102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:29:57.023051 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:29:57.025463 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:29:57.028058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:29:57.034160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:29:57.035494 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:29:57.038594 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:29:57.039863 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:29:57.041373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:29:57.041958 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:29:57.043747 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:29:57.043931 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:29:57.045311 systemd-resolved[1454]: Positive Trust Anchors:
Sep 12 17:29:57.045323 systemd-resolved[1454]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:29:57.045402 systemd-resolved[1454]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:29:57.045565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:29:57.045718 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:29:57.047669 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:29:57.047952 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:29:57.053034 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:29:57.055073 systemd-resolved[1454]: Defaulting to hostname 'linux'.
Sep 12 17:29:57.055737 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:29:57.057504 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:29:57.060021 systemd[1]: Reached target network.target - Network.
Sep 12 17:29:57.060987 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:29:57.062315 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:29:57.062386 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:29:57.082079 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 17:29:57.124652 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 17:29:57.126283 systemd-timesyncd[1508]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 17:29:57.126345 systemd-timesyncd[1508]: Initial clock synchronization to Fri 2025-09-12 17:29:56.939287 UTC.
Sep 12 17:29:57.126443 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:29:57.127669 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:29:57.129012 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:29:57.130285 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:29:57.131576 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:29:57.131620 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:29:57.132607 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:29:57.133861 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:29:57.135014 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:29:57.136277 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:29:57.138334 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:29:57.141067 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:29:57.143245 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:29:57.149011 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:29:57.150152 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:29:57.151196 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:29:57.152402 systemd[1]: System is tainted: cgroupsv1
Sep 12 17:29:57.152460 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:29:57.152481 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:29:57.153834 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:29:57.156180 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:29:57.158989 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:29:57.164554 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:29:57.165744 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:29:57.167107 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:29:57.175488 jq[1514]: false
Sep 12 17:29:57.176944 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:29:57.181334 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:29:57.186977 extend-filesystems[1515]: Found loop3
Sep 12 17:29:57.189629 extend-filesystems[1515]: Found loop4
Sep 12 17:29:57.189629 extend-filesystems[1515]: Found loop5
Sep 12 17:29:57.189629 extend-filesystems[1515]: Found vda
Sep 12 17:29:57.189629 extend-filesystems[1515]: Found vda1
Sep 12 17:29:57.189629 extend-filesystems[1515]: Found vda2
Sep 12 17:29:57.189629 extend-filesystems[1515]: Found vda3
Sep 12 17:29:57.189629 extend-filesystems[1515]: Found usr
Sep 12 17:29:57.189629 extend-filesystems[1515]: Found vda4
Sep 12 17:29:57.189629 extend-filesystems[1515]: Found vda6
Sep 12 17:29:57.189629 extend-filesystems[1515]: Found vda7
Sep 12 17:29:57.189629 extend-filesystems[1515]: Found vda9
Sep 12 17:29:57.189629 extend-filesystems[1515]: Checking size of /dev/vda9
Sep 12 17:29:57.187037 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:29:57.214165 extend-filesystems[1515]: Resized partition /dev/vda9
Sep 12 17:29:57.197063 dbus-daemon[1513]: [system] SELinux support is enabled
Sep 12 17:29:57.198627 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:29:57.217363 extend-filesystems[1538]: resize2fs 1.47.1 (20-May-2024)
Sep 12 17:29:57.229076 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1238)
Sep 12 17:29:57.229171 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 17:29:57.201896 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:29:57.206017 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:29:57.209991 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:29:57.225011 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:29:57.233818 jq[1539]: true
Sep 12 17:29:57.241207 update_engine[1536]: I20250912 17:29:57.240983 1536 main.cc:92] Flatcar Update Engine starting
Sep 12 17:29:57.244762 update_engine[1536]: I20250912 17:29:57.244261 1536 update_check_scheduler.cc:74] Next update check in 6m4s
Sep 12 17:29:57.244361 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:29:57.244676 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:29:57.244983 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:29:57.245188 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:29:57.250183 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:29:57.250430 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:29:57.255781 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:29:57.270033 (ntainerd)[1548]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:29:57.273628 systemd-logind[1530]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:29:57.280678 extend-filesystems[1538]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:29:57.280678 extend-filesystems[1538]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:29:57.280678 extend-filesystems[1538]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:29:57.273858 systemd-logind[1530]: New seat seat0. Sep 12 17:29:57.288017 jq[1547]: true Sep 12 17:29:57.288265 extend-filesystems[1515]: Resized filesystem in /dev/vda9 Sep 12 17:29:57.275016 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:29:57.283326 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:29:57.287979 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:29:57.288222 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:29:57.296741 tar[1546]: linux-arm64/helm Sep 12 17:29:57.297859 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:29:57.298030 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:29:57.302125 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:29:57.302240 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:29:57.304164 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:29:57.305210 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:29:57.344218 bash[1577]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:29:57.347279 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:29:57.350739 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 17:29:57.357082 locksmithd[1563]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:29:57.430613 containerd[1548]: time="2025-09-12T17:29:57.430169200Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:29:57.455868 containerd[1548]: time="2025-09-12T17:29:57.455810240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:29:57.457364 containerd[1548]: time="2025-09-12T17:29:57.457314560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:29:57.457364 containerd[1548]: time="2025-09-12T17:29:57.457356240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:29:57.457450 containerd[1548]: time="2025-09-12T17:29:57.457374440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:29:57.457752 containerd[1548]: time="2025-09-12T17:29:57.457550760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:29:57.457752 containerd[1548]: time="2025-09-12T17:29:57.457578680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:29:57.457752 containerd[1548]: time="2025-09-12T17:29:57.457639920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:29:57.457752 containerd[1548]: time="2025-09-12T17:29:57.457653360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:29:57.458397 containerd[1548]: time="2025-09-12T17:29:57.457893440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:29:57.458397 containerd[1548]: time="2025-09-12T17:29:57.457924040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:29:57.458397 containerd[1548]: time="2025-09-12T17:29:57.457938680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:29:57.458397 containerd[1548]: time="2025-09-12T17:29:57.457950720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:29:57.458397 containerd[1548]: time="2025-09-12T17:29:57.458029280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:29:57.458397 containerd[1548]: time="2025-09-12T17:29:57.458303560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:29:57.458562 containerd[1548]: time="2025-09-12T17:29:57.458447880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:29:57.458562 containerd[1548]: time="2025-09-12T17:29:57.458462440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:29:57.458562 containerd[1548]: time="2025-09-12T17:29:57.458542600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 12 17:29:57.458630 containerd[1548]: time="2025-09-12T17:29:57.458591000Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:29:57.465643 containerd[1548]: time="2025-09-12T17:29:57.465601040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:29:57.465643 containerd[1548]: time="2025-09-12T17:29:57.465661160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:29:57.465792 containerd[1548]: time="2025-09-12T17:29:57.465683280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:29:57.465792 containerd[1548]: time="2025-09-12T17:29:57.465708160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:29:57.465792 containerd[1548]: time="2025-09-12T17:29:57.465733000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:29:57.465942 containerd[1548]: time="2025-09-12T17:29:57.465913640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:29:57.466277 containerd[1548]: time="2025-09-12T17:29:57.466252960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:29:57.466392 containerd[1548]: time="2025-09-12T17:29:57.466372640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:29:57.466414 containerd[1548]: time="2025-09-12T17:29:57.466395920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:29:57.466414 containerd[1548]: time="2025-09-12T17:29:57.466410040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:29:57.466463 containerd[1548]: time="2025-09-12T17:29:57.466424440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:29:57.466463 containerd[1548]: time="2025-09-12T17:29:57.466439000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:29:57.466463 containerd[1548]: time="2025-09-12T17:29:57.466452400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:29:57.466512 containerd[1548]: time="2025-09-12T17:29:57.466466680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:29:57.466512 containerd[1548]: time="2025-09-12T17:29:57.466487280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:29:57.466512 containerd[1548]: time="2025-09-12T17:29:57.466500440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:29:57.466512 containerd[1548]: time="2025-09-12T17:29:57.466513600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:29:57.466587 containerd[1548]: time="2025-09-12T17:29:57.466526080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 12 17:29:57.466587 containerd[1548]: time="2025-09-12T17:29:57.466548720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.466587 containerd[1548]: time="2025-09-12T17:29:57.466562960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.466587 containerd[1548]: time="2025-09-12T17:29:57.466575360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.466657 containerd[1548]: time="2025-09-12T17:29:57.466587920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.466657 containerd[1548]: time="2025-09-12T17:29:57.466606120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.466657 containerd[1548]: time="2025-09-12T17:29:57.466621320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.466657 containerd[1548]: time="2025-09-12T17:29:57.466634240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.466657 containerd[1548]: time="2025-09-12T17:29:57.466649200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.466758 containerd[1548]: time="2025-09-12T17:29:57.466661800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.466758 containerd[1548]: time="2025-09-12T17:29:57.466676000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.466758 containerd[1548]: time="2025-09-12T17:29:57.466691720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.466758 containerd[1548]: time="2025-09-12T17:29:57.466704960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.466758 containerd[1548]: time="2025-09-12T17:29:57.466717120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.466758 containerd[1548]: time="2025-09-12T17:29:57.466750560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:29:57.467188 containerd[1548]: time="2025-09-12T17:29:57.466772480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.467188 containerd[1548]: time="2025-09-12T17:29:57.466786280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.467188 containerd[1548]: time="2025-09-12T17:29:57.466814840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:29:57.467188 containerd[1548]: time="2025-09-12T17:29:57.466961960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:29:57.467188 containerd[1548]: time="2025-09-12T17:29:57.466981880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:29:57.467188 containerd[1548]: time="2025-09-12T17:29:57.466994320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:29:57.467188 containerd[1548]: time="2025-09-12T17:29:57.467005960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:29:57.467188 containerd[1548]: time="2025-09-12T17:29:57.467014880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.467188 containerd[1548]: time="2025-09-12T17:29:57.467026960Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:29:57.467188 containerd[1548]: time="2025-09-12T17:29:57.467037120Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:29:57.467188 containerd[1548]: time="2025-09-12T17:29:57.467047760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:29:57.467488 containerd[1548]: time="2025-09-12T17:29:57.467424760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:29:57.467605 containerd[1548]: time="2025-09-12T17:29:57.467493200Z" level=info msg="Connect containerd service" Sep 12 17:29:57.467605 containerd[1548]: time="2025-09-12T17:29:57.467522080Z" level=info msg="using legacy CRI server" Sep 12 17:29:57.467605 containerd[1548]: time="2025-09-12T17:29:57.467528920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:29:57.467713 containerd[1548]: time="2025-09-12T17:29:57.467622000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:29:57.469128 containerd[1548]: time="2025-09-12T17:29:57.468606320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:29:57.469128 containerd[1548]: time="2025-09-12T17:29:57.469055160Z" level=info msg="Start subscribing containerd event" Sep 12 17:29:57.469128 containerd[1548]: time="2025-09-12T17:29:57.469133720Z" level=info msg="Start recovering state" Sep 12 17:29:57.469248 containerd[1548]: time="2025-09-12T17:29:57.469183280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:29:57.469248 containerd[1548]: time="2025-09-12T17:29:57.469217080Z" level=info msg="Start event monitor" Sep 12 17:29:57.469248 containerd[1548]: time="2025-09-12T17:29:57.469236040Z" level=info msg="Start snapshots syncer" Sep 12 17:29:57.469299 containerd[1548]: time="2025-09-12T17:29:57.469249080Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:29:57.469299 containerd[1548]: time="2025-09-12T17:29:57.469255880Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:29:57.469299 containerd[1548]: time="2025-09-12T17:29:57.469259080Z" level=info msg="Start streaming server" Sep 12 17:29:57.469460 containerd[1548]: time="2025-09-12T17:29:57.469419760Z" level=info msg="containerd successfully booted in 0.043284s" Sep 12 17:29:57.469523 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:29:57.637109 tar[1546]: linux-arm64/LICENSE Sep 12 17:29:57.637312 tar[1546]: linux-arm64/README.md Sep 12 17:29:57.650690 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:29:57.654978 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:29:57.676018 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:29:57.692083 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:29:57.698252 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:29:57.698508 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:29:57.701546 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:29:57.715111 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:29:57.725179 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:29:57.728013 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 17:29:57.729470 systemd[1]: Reached target getty.target - Login Prompts. 
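
The "failed to load cni during init" error above is expected on a first boot: per the CRI plugin config dumped just before it, containerd looks for a network config under /etc/cni/net.d (NetworkPluginConfDir) and loads at most one file (NetworkPluginMaxConfNum:1), and nothing has installed one yet, so pod networking stays uninitialized until a CNI add-on drops a conflist there. A minimal sketch of such a file, assuming the stock bridge and host-local plugins exist under /opt/cni/bin (the NetworkPluginBinDir above); the network name and subnet are illustrative, not taken from this host:

    # write_cni_conf.py - hedged sketch: write a minimal bridge conflist into
    # /etc/cni/net.d so containerd's CRI plugin can initialize pod networking.
    # The "demo-bridge" name and 10.88.0.0/16 subnet are assumptions.
    import json
    import pathlib

    conf = {
        "cniVersion": "0.4.0",
        "name": "demo-bridge",            # hypothetical network name
        "plugins": [{
            "type": "bridge",             # assumes /opt/cni/bin/bridge
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",     # assumes /opt/cni/bin/host-local
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }],
    }

    path = pathlib.Path("/etc/cni/net.d/10-demo-bridge.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2) + "\n")
    print("wrote", path)
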
Sep 12 17:29:58.047015 systemd-networkd[1236]: eth0: Gained IPv6LL Sep 12 17:29:58.050226 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:29:58.052047 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:29:58.067189 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:29:58.079083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:29:58.081646 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:29:58.104115 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:29:58.104357 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:29:58.106421 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:29:58.117344 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:29:58.719050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:29:58.720752 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:29:58.727952 systemd[1]: Startup finished in 5.631s (kernel) + 4.040s (userspace) = 9.672s. Sep 12 17:29:58.729249 (kubelet)[1651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:29:59.149363 kubelet[1651]: E0912 17:29:59.149249 1651 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:29:59.151871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:29:59.152069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:30:02.828077 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:30:02.836042 systemd[1]: Started sshd@0-10.0.0.114:22-10.0.0.1:45708.service - OpenSSH per-connection server daemon (10.0.0.1:45708). Sep 12 17:30:02.878759 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 45708 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:30:02.880736 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:02.892702 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:30:02.899065 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:30:02.901024 systemd-logind[1530]: New session 1 of user core. Sep 12 17:30:02.909578 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:30:02.918189 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:30:02.921715 (systemd)[1669]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:30:03.006311 systemd[1669]: Queued start job for default target default.target. Sep 12 17:30:03.006960 systemd[1669]: Created slice app.slice - User Application Slice. Sep 12 17:30:03.006987 systemd[1669]: Reached target paths.target - Paths. Sep 12 17:30:03.006999 systemd[1669]: Reached target timers.target - Timers. Sep 12 17:30:03.017940 systemd[1669]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Sep 12 17:30:03.024242 systemd[1669]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:30:03.024303 systemd[1669]: Reached target sockets.target - Sockets. Sep 12 17:30:03.024315 systemd[1669]: Reached target basic.target - Basic System. Sep 12 17:30:03.024354 systemd[1669]: Reached target default.target - Main User Target. Sep 12 17:30:03.024379 systemd[1669]: Startup finished in 96ms. Sep 12 17:30:03.024810 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:30:03.026968 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:30:03.083163 systemd[1]: Started sshd@1-10.0.0.114:22-10.0.0.1:45718.service - OpenSSH per-connection server daemon (10.0.0.1:45718). Sep 12 17:30:03.120425 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 45718 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:30:03.122491 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:03.131244 systemd-logind[1530]: New session 2 of user core. Sep 12 17:30:03.141123 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:30:03.203665 sshd[1682]: pam_unix(sshd:session): session closed for user core Sep 12 17:30:03.217077 systemd[1]: Started sshd@2-10.0.0.114:22-10.0.0.1:45734.service - OpenSSH per-connection server daemon (10.0.0.1:45734). Sep 12 17:30:03.217464 systemd[1]: sshd@1-10.0.0.114:22-10.0.0.1:45718.service: Deactivated successfully. Sep 12 17:30:03.220120 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:30:03.222216 systemd-logind[1530]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:30:03.224060 systemd-logind[1530]: Removed session 2. Sep 12 17:30:03.249236 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 45734 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:30:03.252249 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:03.256849 systemd-logind[1530]: New session 3 of user core. Sep 12 17:30:03.268136 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:30:03.319373 sshd[1687]: pam_unix(sshd:session): session closed for user core Sep 12 17:30:03.328066 systemd[1]: Started sshd@3-10.0.0.114:22-10.0.0.1:45742.service - OpenSSH per-connection server daemon (10.0.0.1:45742). Sep 12 17:30:03.328458 systemd[1]: sshd@2-10.0.0.114:22-10.0.0.1:45734.service: Deactivated successfully. Sep 12 17:30:03.330244 systemd-logind[1530]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:30:03.330911 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:30:03.331849 systemd-logind[1530]: Removed session 3. Sep 12 17:30:03.369536 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 45742 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:30:03.370921 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:03.375108 systemd-logind[1530]: New session 4 of user core. Sep 12 17:30:03.387156 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:30:03.442993 sshd[1695]: pam_unix(sshd:session): session closed for user core Sep 12 17:30:03.457131 systemd[1]: Started sshd@4-10.0.0.114:22-10.0.0.1:45752.service - OpenSSH per-connection server daemon (10.0.0.1:45752). Sep 12 17:30:03.457510 systemd[1]: sshd@3-10.0.0.114:22-10.0.0.1:45742.service: Deactivated successfully. Sep 12 17:30:03.459866 systemd-logind[1530]: Session 4 logged out. 
Waiting for processes to exit. Sep 12 17:30:03.460481 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:30:03.461553 systemd-logind[1530]: Removed session 4. Sep 12 17:30:03.492057 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 45752 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:30:03.493435 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:03.497776 systemd-logind[1530]: New session 5 of user core. Sep 12 17:30:03.507101 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:30:03.563055 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:30:03.563333 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:30:03.577590 sudo[1710]: pam_unix(sudo:session): session closed for user root Sep 12 17:30:03.579606 sshd[1703]: pam_unix(sshd:session): session closed for user core Sep 12 17:30:03.591065 systemd[1]: Started sshd@5-10.0.0.114:22-10.0.0.1:45756.service - OpenSSH per-connection server daemon (10.0.0.1:45756). Sep 12 17:30:03.591463 systemd[1]: sshd@4-10.0.0.114:22-10.0.0.1:45752.service: Deactivated successfully. Sep 12 17:30:03.593902 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:30:03.594368 systemd-logind[1530]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:30:03.595308 systemd-logind[1530]: Removed session 5. Sep 12 17:30:03.622378 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 45756 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:30:03.623896 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:03.628597 systemd-logind[1530]: New session 6 of user core. Sep 12 17:30:03.641138 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:30:03.692913 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:30:03.693219 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:30:03.696185 sudo[1720]: pam_unix(sudo:session): session closed for user root Sep 12 17:30:03.700844 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:30:03.701136 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:30:03.719236 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:30:03.720586 auditctl[1723]: No rules Sep 12 17:30:03.721435 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:30:03.721698 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:30:03.723705 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:30:03.747256 augenrules[1742]: No rules Sep 12 17:30:03.748678 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:30:03.750918 sudo[1719]: pam_unix(sudo:session): session closed for user root Sep 12 17:30:03.752718 sshd[1712]: pam_unix(sshd:session): session closed for user core Sep 12 17:30:03.763086 systemd[1]: Started sshd@6-10.0.0.114:22-10.0.0.1:45766.service - OpenSSH per-connection server daemon (10.0.0.1:45766). Sep 12 17:30:03.763494 systemd[1]: sshd@5-10.0.0.114:22-10.0.0.1:45756.service: Deactivated successfully. 
Sep 12 17:30:03.766008 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:30:03.766620 systemd-logind[1530]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:30:03.767673 systemd-logind[1530]: Removed session 6. Sep 12 17:30:03.796483 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 45766 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:30:03.797782 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:03.801441 systemd-logind[1530]: New session 7 of user core. Sep 12 17:30:03.813070 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:30:03.864833 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:30:03.865121 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:30:04.131081 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:30:04.131337 (dockerd)[1773]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:30:04.351846 dockerd[1773]: time="2025-09-12T17:30:04.351641414Z" level=info msg="Starting up" Sep 12 17:30:04.427593 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4041095995-merged.mount: Deactivated successfully. Sep 12 17:30:04.730069 dockerd[1773]: time="2025-09-12T17:30:04.729901800Z" level=info msg="Loading containers: start." Sep 12 17:30:04.831529 kernel: Initializing XFRM netlink socket Sep 12 17:30:04.902381 systemd-networkd[1236]: docker0: Link UP Sep 12 17:30:04.923344 dockerd[1773]: time="2025-09-12T17:30:04.923296149Z" level=info msg="Loading containers: done." Sep 12 17:30:04.935338 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2104930098-merged.mount: Deactivated successfully. Sep 12 17:30:04.935854 dockerd[1773]: time="2025-09-12T17:30:04.935811220Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:30:04.935957 dockerd[1773]: time="2025-09-12T17:30:04.935927547Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:30:04.936079 dockerd[1773]: time="2025-09-12T17:30:04.936062438Z" level=info msg="Daemon has completed initialization" Sep 12 17:30:04.967505 dockerd[1773]: time="2025-09-12T17:30:04.967370893Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:30:04.967638 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:30:05.470943 containerd[1548]: time="2025-09-12T17:30:05.470902132Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:30:06.350862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687747634.mount: Deactivated successfully. 
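
The overlay2 warning above ("Not using native diff ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") means dockerd falls back to its slower userspace diff because that kernel option is switched on. One way to confirm the option on a host like this, assuming the kernel exposes its build config at /proc/config.gz:

    # check_overlay_redirect.py - hedged sketch: grep the running kernel's
    # build config for the option named in the overlay2 warning above.
    # Assumes the kernel provides /proc/config.gz (CONFIG_IKCONFIG_PROC).
    import gzip

    with gzip.open("/proc/config.gz", "rt") as f:
        for line in f:
            if "OVERLAY_FS_REDIRECT_DIR" in line:
                print(line.strip())
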
Sep 12 17:30:07.593543 containerd[1548]: time="2025-09-12T17:30:07.593493478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:07.594635 containerd[1548]: time="2025-09-12T17:30:07.593959700Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327" Sep 12 17:30:07.595911 containerd[1548]: time="2025-09-12T17:30:07.595878050Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:07.598778 containerd[1548]: time="2025-09-12T17:30:07.598728566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:07.599926 containerd[1548]: time="2025-09-12T17:30:07.599881053Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 2.128932607s" Sep 12 17:30:07.600006 containerd[1548]: time="2025-09-12T17:30:07.599933522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 12 17:30:07.601340 containerd[1548]: time="2025-09-12T17:30:07.601293025Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:30:08.867055 containerd[1548]: time="2025-09-12T17:30:08.866987556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:08.868895 containerd[1548]: time="2025-09-12T17:30:08.868838311Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769" Sep 12 17:30:08.870668 containerd[1548]: time="2025-09-12T17:30:08.870637544Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:08.874071 containerd[1548]: time="2025-09-12T17:30:08.874017920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:08.875115 containerd[1548]: time="2025-09-12T17:30:08.875078124Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.273750734s" Sep 12 17:30:08.875115 containerd[1548]: time="2025-09-12T17:30:08.875112630Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 12 
17:30:08.875910 containerd[1548]: time="2025-09-12T17:30:08.875882151Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:30:09.402632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:30:09.411985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:30:09.538346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:30:09.542345 (kubelet)[1994]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:30:09.666049 kubelet[1994]: E0912 17:30:09.665522 1994 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:30:09.669197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:30:09.669377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:30:10.003349 containerd[1548]: time="2025-09-12T17:30:10.003215073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:10.004784 containerd[1548]: time="2025-09-12T17:30:10.004750653Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508" Sep 12 17:30:10.005861 containerd[1548]: time="2025-09-12T17:30:10.005837674Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:10.010323 containerd[1548]: time="2025-09-12T17:30:10.010290940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:10.011751 containerd[1548]: time="2025-09-12T17:30:10.011377084Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.13545731s" Sep 12 17:30:10.011751 containerd[1548]: time="2025-09-12T17:30:10.011419285Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 12 17:30:10.012086 containerd[1548]: time="2025-09-12T17:30:10.012063917Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:30:10.955733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2273666448.mount: Deactivated successfully. 
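
The kubelet failure above is the same one from the first start and will recur until the node is bootstrapped: the service starts kubelet pointed at /var/lib/kubelet/config.yaml, a file that is typically written during kubeadm init/join on a node like this, so until that happens systemd just keeps scheduling restarts. For orientation only, a minimal sketch of the kind of KubeletConfiguration that ends up there; every value below is an illustrative assumption, though the static-pod path and client CA file match what the kubelet logs further down:

    # write_kubelet_config.py - hedged sketch: a minimal KubeletConfiguration
    # of the sort kubeadm writes to /var/lib/kubelet/config.yaml. All values
    # are illustrative assumptions, not this node's real settings.
    import pathlib
    import textwrap

    KUBELET_CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        staticPodPath: /etc/kubernetes/manifests
        clusterDomain: cluster.local
        clusterDNS:
        - 10.96.0.10
        authentication:
          x509:
            clientCAFile: /etc/kubernetes/pki/ca.crt
    """)

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(KUBELET_CONFIG)
    print("wrote", path)
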
Sep 12 17:30:11.307574 containerd[1548]: time="2025-09-12T17:30:11.307461278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:11.308551 containerd[1548]: time="2025-09-12T17:30:11.308392229Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909" Sep 12 17:30:11.309210 containerd[1548]: time="2025-09-12T17:30:11.309184234Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:11.311437 containerd[1548]: time="2025-09-12T17:30:11.311404017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:11.312134 containerd[1548]: time="2025-09-12T17:30:11.312108425Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.300012095s" Sep 12 17:30:11.312175 containerd[1548]: time="2025-09-12T17:30:11.312141249Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 12 17:30:11.312721 containerd[1548]: time="2025-09-12T17:30:11.312698824Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:30:11.834024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount367065454.mount: Deactivated successfully. 
Sep 12 17:30:12.400372 containerd[1548]: time="2025-09-12T17:30:12.400327267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:12.400947 containerd[1548]: time="2025-09-12T17:30:12.400911235Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 12 17:30:12.401613 containerd[1548]: time="2025-09-12T17:30:12.401580017Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:12.404517 containerd[1548]: time="2025-09-12T17:30:12.404485352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:12.406860 containerd[1548]: time="2025-09-12T17:30:12.406827005Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.094095033s" Sep 12 17:30:12.406925 containerd[1548]: time="2025-09-12T17:30:12.406861600Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 17:30:12.407381 containerd[1548]: time="2025-09-12T17:30:12.407358563Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:30:12.823206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464308742.mount: Deactivated successfully. 
Sep 12 17:30:12.827817 containerd[1548]: time="2025-09-12T17:30:12.827508212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:12.828213 containerd[1548]: time="2025-09-12T17:30:12.828017969Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 12 17:30:12.828917 containerd[1548]: time="2025-09-12T17:30:12.828881327Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:12.831823 containerd[1548]: time="2025-09-12T17:30:12.831786463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:12.833415 containerd[1548]: time="2025-09-12T17:30:12.833382094Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 425.986543ms" Sep 12 17:30:12.833464 containerd[1548]: time="2025-09-12T17:30:12.833416131Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:30:12.834033 containerd[1548]: time="2025-09-12T17:30:12.834000179Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:30:13.315011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3065563984.mount: Deactivated successfully. Sep 12 17:30:14.644198 containerd[1548]: time="2025-09-12T17:30:14.644149052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:14.645303 containerd[1548]: time="2025-09-12T17:30:14.644964078Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 12 17:30:14.646538 containerd[1548]: time="2025-09-12T17:30:14.646077837Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:14.649766 containerd[1548]: time="2025-09-12T17:30:14.649202435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:14.650594 containerd[1548]: time="2025-09-12T17:30:14.650553418Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.81650931s" Sep 12 17:30:14.650594 containerd[1548]: time="2025-09-12T17:30:14.650592351Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 12 17:30:19.919654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
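
Each of the pulls above logs a byte count and a duration, which gives a rough effective transfer rate per image; a quick sketch over the values copied verbatim from the log:

    # pull_rates.py - back-of-envelope throughput from the sizes/durations
    # containerd logged above (all numbers copied from this log).
    pulls = {
        "kube-apiserver:v1.31.13":          (25_683_924, 2.128932607),
        "kube-controller-manager:v1.31.13": (24_028_542, 1.273750734),
        "kube-scheduler:v1.31.13":          (18_696_299, 1.13545731),
        "kube-proxy:v1.31.13":              (26_953_926, 1.300012095),
        "coredns:v1.11.3":                  (16_948_420, 1.094095033),
        "pause:3.10":                       (267_933,    0.425986543),
        "etcd:3.5.15-0":                    (66_535_646, 1.81650931),
    }
    for image, (size, secs) in pulls.items():
        print(f"{image:36s} {size / secs / 1e6:6.1f} MB/s")

The tiny pause image comes out far slower per byte than the large ones, a reminder that these are wall-clock rates dominated by registry round-trips for small images.
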
Sep 12 17:30:19.929022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:30:20.041827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:30:20.046354 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:30:20.082813 kubelet[2160]: E0912 17:30:20.082729 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:30:20.085477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:30:20.085837 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:30:20.844429 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:30:20.860514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:30:20.890455 systemd[1]: Reloading requested from client PID 2178 ('systemctl') (unit session-7.scope)... Sep 12 17:30:20.890470 systemd[1]: Reloading... Sep 12 17:30:20.960852 zram_generator::config[2217]: No configuration found. Sep 12 17:30:21.125820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:30:21.179833 systemd[1]: Reloading finished in 289 ms. Sep 12 17:30:21.215003 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:30:21.215070 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:30:21.215360 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:30:21.217264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:30:21.320869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:30:21.326204 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:30:21.371739 kubelet[2274]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:30:21.371739 kubelet[2274]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:30:21.371739 kubelet[2274]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:30:21.372138 kubelet[2274]: I0912 17:30:21.371791 2274 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:30:22.332028 kubelet[2274]: I0912 17:30:22.331985 2274 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:30:22.332028 kubelet[2274]: I0912 17:30:22.332023 2274 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:30:22.332302 kubelet[2274]: I0912 17:30:22.332283 2274 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:30:22.352282 kubelet[2274]: E0912 17:30:22.352233 2274 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:30:22.352544 kubelet[2274]: I0912 17:30:22.352520 2274 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:30:22.359338 kubelet[2274]: E0912 17:30:22.359226 2274 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:30:22.359338 kubelet[2274]: I0912 17:30:22.359255 2274 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:30:22.363363 kubelet[2274]: I0912 17:30:22.363267 2274 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:30:22.364858 kubelet[2274]: I0912 17:30:22.364390 2274 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:30:22.364858 kubelet[2274]: I0912 17:30:22.364551 2274 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:30:22.364858 kubelet[2274]: I0912 17:30:22.364580 2274 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 17:30:22.364858 kubelet[2274]: I0912 17:30:22.364839 2274 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:30:22.365095 kubelet[2274]: I0912 17:30:22.364849 2274 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:30:22.365095 kubelet[2274]: I0912 17:30:22.365092 2274 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:30:22.367532 kubelet[2274]: I0912 17:30:22.367499 2274 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:30:22.367532 kubelet[2274]: I0912 17:30:22.367531 2274 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:30:22.367611 kubelet[2274]: I0912 17:30:22.367551 2274 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:30:22.367635 kubelet[2274]: I0912 17:30:22.367625 2274 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:30:22.370533 kubelet[2274]: W0912 17:30:22.370330 2274 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 12 17:30:22.370533 kubelet[2274]: E0912 17:30:22.370403 2274 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:30:22.371389 kubelet[2274]: W0912 17:30:22.371274 2274 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 12 17:30:22.371389 kubelet[2274]: E0912 17:30:22.371341 2274 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:30:22.372146 kubelet[2274]: I0912 17:30:22.371928 2274 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:30:22.373049 kubelet[2274]: I0912 17:30:22.372707 2274 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:30:22.373049 kubelet[2274]: W0912 17:30:22.372893 2274 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:30:22.374205 kubelet[2274]: I0912 17:30:22.374175 2274 server.go:1274] "Started kubelet" Sep 12 17:30:22.374835 kubelet[2274]: I0912 17:30:22.374462 2274 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:30:22.375356 kubelet[2274]: I0912 17:30:22.375076 2274 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:30:22.375356 kubelet[2274]: I0912 17:30:22.374458 2274 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:30:22.376027 kubelet[2274]: I0912 17:30:22.375995 2274 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:30:22.376554 kubelet[2274]: I0912 17:30:22.376531 2274 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:30:22.376977 kubelet[2274]: I0912 17:30:22.376946 2274 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:30:22.378512 kubelet[2274]: E0912 17:30:22.377272 2274 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.114:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.114:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864993876f246df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:30:22.374143711 +0000 UTC m=+1.043899853,LastTimestamp:2025-09-12 17:30:22.374143711 +0000 UTC m=+1.043899853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:30:22.378839 kubelet[2274]: I0912 17:30:22.378679 2274 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:30:22.378839 kubelet[2274]: I0912 
17:30:22.378827 2274 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:30:22.378911 kubelet[2274]: I0912 17:30:22.378895 2274 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:30:22.379633 kubelet[2274]: W0912 17:30:22.379303 2274 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 12 17:30:22.379633 kubelet[2274]: E0912 17:30:22.379363 2274 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:30:22.379633 kubelet[2274]: E0912 17:30:22.379483 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:30:22.380518 kubelet[2274]: I0912 17:30:22.380488 2274 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:30:22.380518 kubelet[2274]: I0912 17:30:22.380509 2274 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:30:22.380598 kubelet[2274]: I0912 17:30:22.380576 2274 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:30:22.380860 kubelet[2274]: E0912 17:30:22.380827 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="200ms" Sep 12 17:30:22.383213 kubelet[2274]: E0912 17:30:22.383187 2274 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:30:22.391879 kubelet[2274]: I0912 17:30:22.391841 2274 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:30:22.393020 kubelet[2274]: I0912 17:30:22.392998 2274 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:30:22.393131 kubelet[2274]: I0912 17:30:22.393121 2274 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:30:22.393880 kubelet[2274]: I0912 17:30:22.393204 2274 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:30:22.393880 kubelet[2274]: E0912 17:30:22.393255 2274 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:30:22.394410 kubelet[2274]: W0912 17:30:22.394364 2274 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 12 17:30:22.394864 kubelet[2274]: E0912 17:30:22.394838 2274 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:30:22.404587 kubelet[2274]: I0912 17:30:22.404557 2274 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:30:22.404587 kubelet[2274]: I0912 17:30:22.404576 2274 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:30:22.404587 kubelet[2274]: I0912 17:30:22.404593 2274 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:30:22.476098 kubelet[2274]: I0912 17:30:22.476059 2274 policy_none.go:49] "None policy: Start" Sep 12 17:30:22.476887 kubelet[2274]: I0912 17:30:22.476856 2274 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:30:22.476887 kubelet[2274]: I0912 17:30:22.476883 2274 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:30:22.479564 kubelet[2274]: E0912 17:30:22.479530 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:30:22.481720 kubelet[2274]: I0912 17:30:22.480916 2274 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:30:22.481720 kubelet[2274]: I0912 17:30:22.481116 2274 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:30:22.481720 kubelet[2274]: I0912 17:30:22.481128 2274 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:30:22.481862 kubelet[2274]: I0912 17:30:22.481742 2274 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:30:22.483049 kubelet[2274]: E0912 17:30:22.483024 2274 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:30:22.580298 kubelet[2274]: I0912 17:30:22.580246 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/424aaebaf19afe11fac880b43002720b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"424aaebaf19afe11fac880b43002720b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:22.580298 kubelet[2274]: I0912 17:30:22.580287 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/424aaebaf19afe11fac880b43002720b-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"424aaebaf19afe11fac880b43002720b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:22.580298 kubelet[2274]: I0912 17:30:22.580311 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:22.580482 kubelet[2274]: I0912 17:30:22.580327 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:22.580482 kubelet[2274]: I0912 17:30:22.580343 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:22.580482 kubelet[2274]: I0912 17:30:22.580360 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:30:22.580482 kubelet[2274]: I0912 17:30:22.580374 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:22.580482 kubelet[2274]: I0912 17:30:22.580392 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:22.580584 kubelet[2274]: I0912 17:30:22.580407 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/424aaebaf19afe11fac880b43002720b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"424aaebaf19afe11fac880b43002720b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:22.581643 kubelet[2274]: E0912 17:30:22.581583 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="400ms" Sep 12 17:30:22.582498 kubelet[2274]: I0912 17:30:22.582422 2274 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:30:22.583111 kubelet[2274]: E0912 17:30:22.583073 2274 kubelet_node_status.go:95] "Unable to register node with API server" 
err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Sep 12 17:30:22.785122 kubelet[2274]: I0912 17:30:22.785058 2274 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:30:22.785487 kubelet[2274]: E0912 17:30:22.785444 2274 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Sep 12 17:30:22.800769 kubelet[2274]: E0912 17:30:22.800740 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:22.800845 kubelet[2274]: E0912 17:30:22.800782 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:22.801645 containerd[1548]: time="2025-09-12T17:30:22.801360897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:424aaebaf19afe11fac880b43002720b,Namespace:kube-system,Attempt:0,}" Sep 12 17:30:22.801645 containerd[1548]: time="2025-09-12T17:30:22.801395744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 17:30:22.803041 kubelet[2274]: E0912 17:30:22.803009 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:22.803744 containerd[1548]: time="2025-09-12T17:30:22.803631902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 17:30:22.982921 kubelet[2274]: E0912 17:30:22.982797 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="800ms" Sep 12 17:30:23.186784 kubelet[2274]: I0912 17:30:23.186731 2274 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:30:23.187121 kubelet[2274]: E0912 17:30:23.187079 2274 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Sep 12 17:30:23.278669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1812167079.mount: Deactivated successfully. 
Sep 12 17:30:23.283241 containerd[1548]: time="2025-09-12T17:30:23.283185033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:30:23.286666 containerd[1548]: time="2025-09-12T17:30:23.286628055Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 12 17:30:23.287747 containerd[1548]: time="2025-09-12T17:30:23.287702283Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:30:23.290135 containerd[1548]: time="2025-09-12T17:30:23.289906214Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:30:23.290858 containerd[1548]: time="2025-09-12T17:30:23.290777771Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:30:23.292854 containerd[1548]: time="2025-09-12T17:30:23.291829937Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:30:23.294261 containerd[1548]: time="2025-09-12T17:30:23.294194495Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:30:23.295134 containerd[1548]: time="2025-09-12T17:30:23.295097305Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.658681ms" Sep 12 17:30:23.296488 containerd[1548]: time="2025-09-12T17:30:23.296433716Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.723647ms" Sep 12 17:30:23.296930 containerd[1548]: time="2025-09-12T17:30:23.296886900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:30:23.299771 containerd[1548]: time="2025-09-12T17:30:23.299716392Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 498.239044ms" Sep 12 17:30:23.413138 kubelet[2274]: W0912 17:30:23.413056 2274 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 12 17:30:23.413514 
kubelet[2274]: E0912 17:30:23.413143 2274 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:30:23.417993 containerd[1548]: time="2025-09-12T17:30:23.417612338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:30:23.417993 containerd[1548]: time="2025-09-12T17:30:23.417836432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:30:23.417993 containerd[1548]: time="2025-09-12T17:30:23.417871723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:30:23.418368 containerd[1548]: time="2025-09-12T17:30:23.418321389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:30:23.418425 containerd[1548]: time="2025-09-12T17:30:23.418343411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:30:23.418425 containerd[1548]: time="2025-09-12T17:30:23.418387694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:30:23.418425 containerd[1548]: time="2025-09-12T17:30:23.418406838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:30:23.418550 containerd[1548]: time="2025-09-12T17:30:23.418484014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:30:23.419882 containerd[1548]: time="2025-09-12T17:30:23.419734936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:30:23.419882 containerd[1548]: time="2025-09-12T17:30:23.419785974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:30:23.419882 containerd[1548]: time="2025-09-12T17:30:23.419812751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:30:23.419997 containerd[1548]: time="2025-09-12T17:30:23.419895962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:30:23.459464 kubelet[2274]: W0912 17:30:23.459402 2274 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 12 17:30:23.459881 kubelet[2274]: E0912 17:30:23.459470 2274 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:30:23.474220 containerd[1548]: time="2025-09-12T17:30:23.474174871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:424aaebaf19afe11fac880b43002720b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e3d51f3b0814e5c352ab7b9392788ad2b525c2366db8efbf6a90d02edfcee53\"" Sep 12 17:30:23.475437 kubelet[2274]: E0912 17:30:23.475408 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:23.477359 containerd[1548]: time="2025-09-12T17:30:23.477299278Z" level=info msg="CreateContainer within sandbox \"8e3d51f3b0814e5c352ab7b9392788ad2b525c2366db8efbf6a90d02edfcee53\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:30:23.480396 containerd[1548]: time="2025-09-12T17:30:23.480345309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"78ce5192ab75eea94ee16be610a58c389db73a034be6f0e703985f62ad70a6de\"" Sep 12 17:30:23.482109 kubelet[2274]: E0912 17:30:23.481908 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:23.483943 containerd[1548]: time="2025-09-12T17:30:23.483875619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e39ee9ca9afcbbf0bee15dd68fd1e7edd90926dbc3ecd776f4eb73a8390e692c\"" Sep 12 17:30:23.484770 kubelet[2274]: E0912 17:30:23.484739 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:23.485131 containerd[1548]: time="2025-09-12T17:30:23.484984179Z" level=info msg="CreateContainer within sandbox \"78ce5192ab75eea94ee16be610a58c389db73a034be6f0e703985f62ad70a6de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:30:23.486680 containerd[1548]: time="2025-09-12T17:30:23.486638686Z" level=info msg="CreateContainer within sandbox \"e39ee9ca9afcbbf0bee15dd68fd1e7edd90926dbc3ecd776f4eb73a8390e692c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:30:23.495885 containerd[1548]: time="2025-09-12T17:30:23.495782936Z" level=info msg="CreateContainer within sandbox \"8e3d51f3b0814e5c352ab7b9392788ad2b525c2366db8efbf6a90d02edfcee53\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"19698e63d22c684bf6f6ce4cd9f318bdf000bca6601949227503fdd033af4e4a\"" Sep 12 17:30:23.497087 containerd[1548]: time="2025-09-12T17:30:23.497027583Z" level=info msg="StartContainer for \"19698e63d22c684bf6f6ce4cd9f318bdf000bca6601949227503fdd033af4e4a\"" Sep 12 17:30:23.504446 containerd[1548]: time="2025-09-12T17:30:23.504388274Z" level=info msg="CreateContainer within sandbox \"78ce5192ab75eea94ee16be610a58c389db73a034be6f0e703985f62ad70a6de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9039a64fd2da31dae4c3dbcfd6c3c124104fb26c0bc862adf523b58ae99fd806\"" Sep 12 17:30:23.505104 containerd[1548]: time="2025-09-12T17:30:23.505003683Z" level=info msg="StartContainer for \"9039a64fd2da31dae4c3dbcfd6c3c124104fb26c0bc862adf523b58ae99fd806\"" Sep 12 17:30:23.510344 containerd[1548]: time="2025-09-12T17:30:23.510293892Z" level=info msg="CreateContainer within sandbox \"e39ee9ca9afcbbf0bee15dd68fd1e7edd90926dbc3ecd776f4eb73a8390e692c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"57969686f43069e430abe2b8e7bf34e672e3c549985e86041804e4a0c201daa4\"" Sep 12 17:30:23.511645 containerd[1548]: time="2025-09-12T17:30:23.511607402Z" level=info msg="StartContainer for \"57969686f43069e430abe2b8e7bf34e672e3c549985e86041804e4a0c201daa4\"" Sep 12 17:30:23.545456 kubelet[2274]: W0912 17:30:23.545109 2274 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 12 17:30:23.545456 kubelet[2274]: E0912 17:30:23.545179 2274 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:30:23.570312 containerd[1548]: time="2025-09-12T17:30:23.570093458Z" level=info msg="StartContainer for \"19698e63d22c684bf6f6ce4cd9f318bdf000bca6601949227503fdd033af4e4a\" returns successfully" Sep 12 17:30:23.575175 containerd[1548]: time="2025-09-12T17:30:23.575109735Z" level=info msg="StartContainer for \"9039a64fd2da31dae4c3dbcfd6c3c124104fb26c0bc862adf523b58ae99fd806\" returns successfully" Sep 12 17:30:23.585994 containerd[1548]: time="2025-09-12T17:30:23.584241156Z" level=info msg="StartContainer for \"57969686f43069e430abe2b8e7bf34e672e3c549985e86041804e4a0c201daa4\" returns successfully" Sep 12 17:30:23.989482 kubelet[2274]: I0912 17:30:23.989379 2274 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:30:24.410756 kubelet[2274]: E0912 17:30:24.410621 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:24.415220 kubelet[2274]: E0912 17:30:24.415194 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:24.420837 kubelet[2274]: E0912 17:30:24.420140 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:25.432190 kubelet[2274]: E0912 17:30:25.430588 2274 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 17:30:25.438102 kubelet[2274]: E0912 17:30:25.434820 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:25.498946 kubelet[2274]: I0912 17:30:25.498832 2274 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:30:25.498946 kubelet[2274]: E0912 17:30:25.498869 2274 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:30:25.510081 kubelet[2274]: E0912 17:30:25.510051 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:30:25.610999 kubelet[2274]: E0912 17:30:25.610890 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:30:25.711462 kubelet[2274]: E0912 17:30:25.711326 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:30:25.812227 kubelet[2274]: E0912 17:30:25.812124 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:30:25.913309 kubelet[2274]: E0912 17:30:25.913243 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:30:26.013534 kubelet[2274]: E0912 17:30:26.013387 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:30:26.114322 kubelet[2274]: E0912 17:30:26.114275 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:30:26.215047 kubelet[2274]: E0912 17:30:26.214957 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:30:26.316035 kubelet[2274]: E0912 17:30:26.315878 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:30:27.259317 kubelet[2274]: E0912 17:30:27.258522 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:27.371213 kubelet[2274]: I0912 17:30:27.371179 2274 apiserver.go:52] "Watching apiserver" Sep 12 17:30:27.379560 kubelet[2274]: I0912 17:30:27.379475 2274 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:30:27.436860 kubelet[2274]: E0912 17:30:27.436767 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:27.764611 systemd[1]: Reloading requested from client PID 2550 ('systemctl') (unit session-7.scope)... Sep 12 17:30:27.764629 systemd[1]: Reloading... Sep 12 17:30:27.827846 zram_generator::config[2589]: No configuration found. Sep 12 17:30:27.937501 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:30:28.001605 systemd[1]: Reloading finished in 236 ms. 
Sep 12 17:30:28.026137 kubelet[2274]: I0912 17:30:28.026017 2274 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:30:28.028304 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:30:28.045060 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:30:28.045398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:30:28.056066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:30:28.157891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:30:28.163891 (kubelet)[2641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:30:28.219613 kubelet[2641]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:30:28.219613 kubelet[2641]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:30:28.219613 kubelet[2641]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:30:28.220065 kubelet[2641]: I0912 17:30:28.219735 2641 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:30:28.226183 kubelet[2641]: I0912 17:30:28.226137 2641 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:30:28.226183 kubelet[2641]: I0912 17:30:28.226169 2641 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:30:28.226388 kubelet[2641]: I0912 17:30:28.226374 2641 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:30:28.229439 kubelet[2641]: I0912 17:30:28.228886 2641 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:30:28.231940 kubelet[2641]: I0912 17:30:28.231911 2641 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:30:28.234790 kubelet[2641]: E0912 17:30:28.234755 2641 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:30:28.234933 kubelet[2641]: I0912 17:30:28.234920 2641 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:30:28.237395 kubelet[2641]: I0912 17:30:28.237365 2641 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:30:28.237738 kubelet[2641]: I0912 17:30:28.237720 2641 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:30:28.237885 kubelet[2641]: I0912 17:30:28.237853 2641 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:30:28.238058 kubelet[2641]: I0912 17:30:28.237886 2641 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 17:30:28.238140 kubelet[2641]: I0912 17:30:28.238067 2641 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:30:28.238140 kubelet[2641]: I0912 17:30:28.238077 2641 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:30:28.238140 kubelet[2641]: I0912 17:30:28.238117 2641 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:30:28.238223 kubelet[2641]: I0912 17:30:28.238209 2641 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:30:28.238246 kubelet[2641]: I0912 17:30:28.238223 2641 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:30:28.238246 kubelet[2641]: I0912 17:30:28.238242 2641 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:30:28.238286 kubelet[2641]: I0912 17:30:28.238255 2641 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:30:28.244006 kubelet[2641]: I0912 17:30:28.243270 2641 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:30:28.246476 kubelet[2641]: I0912 17:30:28.245673 2641 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:30:28.247120 kubelet[2641]: I0912 17:30:28.247091 2641 server.go:1274] "Started kubelet" Sep 12 17:30:28.249001 kubelet[2641]: I0912 17:30:28.248945 2641 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:30:28.252847 kubelet[2641]: I0912 
17:30:28.250738 2641 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:30:28.253551 kubelet[2641]: I0912 17:30:28.253517 2641 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:30:28.254967 kubelet[2641]: I0912 17:30:28.252312 2641 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:30:28.255224 kubelet[2641]: I0912 17:30:28.255193 2641 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:30:28.255254 kubelet[2641]: I0912 17:30:28.253907 2641 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:30:28.255388 kubelet[2641]: I0912 17:30:28.255366 2641 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:30:28.255388 kubelet[2641]: E0912 17:30:28.253958 2641 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:30:28.255388 kubelet[2641]: I0912 17:30:28.253131 2641 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:30:28.255462 kubelet[2641]: I0912 17:30:28.253917 2641 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:30:28.256873 kubelet[2641]: E0912 17:30:28.256835 2641 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:30:28.257117 kubelet[2641]: I0912 17:30:28.257027 2641 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:30:28.268080 kubelet[2641]: I0912 17:30:28.265932 2641 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:30:28.268080 kubelet[2641]: I0912 17:30:28.267155 2641 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:30:28.279370 kubelet[2641]: I0912 17:30:28.279257 2641 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:30:28.284012 kubelet[2641]: I0912 17:30:28.283966 2641 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:30:28.284012 kubelet[2641]: I0912 17:30:28.284002 2641 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:30:28.284237 kubelet[2641]: I0912 17:30:28.284037 2641 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:30:28.284237 kubelet[2641]: E0912 17:30:28.284095 2641 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:30:28.321461 kubelet[2641]: I0912 17:30:28.321431 2641 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:30:28.321461 kubelet[2641]: I0912 17:30:28.321454 2641 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:30:28.321600 kubelet[2641]: I0912 17:30:28.321489 2641 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:30:28.321708 kubelet[2641]: I0912 17:30:28.321690 2641 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:30:28.321747 kubelet[2641]: I0912 17:30:28.321707 2641 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:30:28.321747 kubelet[2641]: I0912 17:30:28.321728 2641 policy_none.go:49] "None policy: Start" Sep 12 17:30:28.322427 kubelet[2641]: I0912 17:30:28.322400 2641 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:30:28.322427 kubelet[2641]: I0912 17:30:28.322427 2641 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:30:28.322636 kubelet[2641]: I0912 17:30:28.322600 2641 state_mem.go:75] "Updated machine memory state" Sep 12 17:30:28.324655 kubelet[2641]: I0912 17:30:28.323815 2641 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:30:28.324655 kubelet[2641]: I0912 17:30:28.324016 2641 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:30:28.324655 kubelet[2641]: I0912 17:30:28.324029 2641 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:30:28.324655 kubelet[2641]: I0912 17:30:28.324254 2641 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:30:28.396369 kubelet[2641]: E0912 17:30:28.396199 2641 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:28.429916 kubelet[2641]: I0912 17:30:28.429888 2641 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:30:28.440702 kubelet[2641]: I0912 17:30:28.440671 2641 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 17:30:28.440826 kubelet[2641]: I0912 17:30:28.440774 2641 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:30:28.556424 kubelet[2641]: I0912 17:30:28.556292 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:28.556424 kubelet[2641]: I0912 17:30:28.556339 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:28.556424 kubelet[2641]: I0912 17:30:28.556361 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:28.556424 kubelet[2641]: I0912 17:30:28.556379 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:28.556424 kubelet[2641]: I0912 17:30:28.556401 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/424aaebaf19afe11fac880b43002720b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"424aaebaf19afe11fac880b43002720b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:28.556619 kubelet[2641]: I0912 17:30:28.556416 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/424aaebaf19afe11fac880b43002720b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"424aaebaf19afe11fac880b43002720b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:28.556619 kubelet[2641]: I0912 17:30:28.556431 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:28.556619 kubelet[2641]: I0912 17:30:28.556445 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:30:28.556619 kubelet[2641]: I0912 17:30:28.556462 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/424aaebaf19afe11fac880b43002720b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"424aaebaf19afe11fac880b43002720b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:28.694281 kubelet[2641]: E0912 17:30:28.692893 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:28.694281 kubelet[2641]: E0912 17:30:28.694097 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:28.697483 kubelet[2641]: E0912 17:30:28.697361 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:29.239205 kubelet[2641]: I0912 17:30:29.239144 2641 apiserver.go:52] "Watching apiserver" Sep 12 17:30:29.255990 kubelet[2641]: I0912 17:30:29.255945 2641 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:30:29.298436 kubelet[2641]: E0912 17:30:29.298389 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:29.299418 kubelet[2641]: E0912 17:30:29.299384 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:29.315136 kubelet[2641]: E0912 17:30:29.315087 2641 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:29.315301 kubelet[2641]: E0912 17:30:29.315283 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:29.345994 kubelet[2641]: I0912 17:30:29.345045 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.344997238 podStartE2EDuration="1.344997238s" podCreationTimestamp="2025-09-12 17:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:30:29.329106875 +0000 UTC m=+1.161728346" watchObservedRunningTime="2025-09-12 17:30:29.344997238 +0000 UTC m=+1.177618709" Sep 12 17:30:29.357564 kubelet[2641]: I0912 17:30:29.357503 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.357485709 podStartE2EDuration="2.357485709s" podCreationTimestamp="2025-09-12 17:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:30:29.357441765 +0000 UTC m=+1.190063236" watchObservedRunningTime="2025-09-12 17:30:29.357485709 +0000 UTC m=+1.190107180" Sep 12 17:30:29.357955 kubelet[2641]: I0912 17:30:29.357822 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.3577920350000001 podStartE2EDuration="1.357792035s" podCreationTimestamp="2025-09-12 17:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:30:29.346128537 +0000 UTC m=+1.178750008" watchObservedRunningTime="2025-09-12 17:30:29.357792035 +0000 UTC m=+1.190413506" Sep 12 17:30:30.301655 kubelet[2641]: E0912 17:30:30.301199 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:30.302084 kubelet[2641]: E0912 17:30:30.301761 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:33.607825 kubelet[2641]: I0912 17:30:33.607756 2641 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" Sep 12 17:30:33.608192 containerd[1548]: time="2025-09-12T17:30:33.608130615Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:30:33.608405 kubelet[2641]: I0912 17:30:33.608320 2641 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:30:33.621560 kubelet[2641]: E0912 17:30:33.621526 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:34.088711 kubelet[2641]: I0912 17:30:34.088589 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1808dc6a-42e2-4c10-ace6-7f60afe8452f-kube-proxy\") pod \"kube-proxy-rcz5v\" (UID: \"1808dc6a-42e2-4c10-ace6-7f60afe8452f\") " pod="kube-system/kube-proxy-rcz5v" Sep 12 17:30:34.088711 kubelet[2641]: I0912 17:30:34.088632 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1808dc6a-42e2-4c10-ace6-7f60afe8452f-xtables-lock\") pod \"kube-proxy-rcz5v\" (UID: \"1808dc6a-42e2-4c10-ace6-7f60afe8452f\") " pod="kube-system/kube-proxy-rcz5v" Sep 12 17:30:34.088711 kubelet[2641]: I0912 17:30:34.088651 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1808dc6a-42e2-4c10-ace6-7f60afe8452f-lib-modules\") pod \"kube-proxy-rcz5v\" (UID: \"1808dc6a-42e2-4c10-ace6-7f60afe8452f\") " pod="kube-system/kube-proxy-rcz5v" Sep 12 17:30:34.088711 kubelet[2641]: I0912 17:30:34.088669 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nv6s\" (UniqueName: \"kubernetes.io/projected/1808dc6a-42e2-4c10-ace6-7f60afe8452f-kube-api-access-5nv6s\") pod \"kube-proxy-rcz5v\" (UID: \"1808dc6a-42e2-4c10-ace6-7f60afe8452f\") " pod="kube-system/kube-proxy-rcz5v" Sep 12 17:30:34.200987 kubelet[2641]: E0912 17:30:34.200678 2641 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 12 17:30:34.200987 kubelet[2641]: E0912 17:30:34.200708 2641 projected.go:194] Error preparing data for projected volume kube-api-access-5nv6s for pod kube-system/kube-proxy-rcz5v: configmap "kube-root-ca.crt" not found Sep 12 17:30:34.200987 kubelet[2641]: E0912 17:30:34.200784 2641 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1808dc6a-42e2-4c10-ace6-7f60afe8452f-kube-api-access-5nv6s podName:1808dc6a-42e2-4c10-ace6-7f60afe8452f nodeName:}" failed. No retries permitted until 2025-09-12 17:30:34.700762359 +0000 UTC m=+6.533383830 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5nv6s" (UniqueName: "kubernetes.io/projected/1808dc6a-42e2-4c10-ace6-7f60afe8452f-kube-api-access-5nv6s") pod "kube-proxy-rcz5v" (UID: "1808dc6a-42e2-4c10-ace6-7f60afe8452f") : configmap "kube-root-ca.crt" not found Sep 12 17:30:34.308006 kubelet[2641]: E0912 17:30:34.307949 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:34.870751 kubelet[2641]: E0912 17:30:34.869950 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:34.871177 containerd[1548]: time="2025-09-12T17:30:34.870989471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rcz5v,Uid:1808dc6a-42e2-4c10-ace6-7f60afe8452f,Namespace:kube-system,Attempt:0,}" Sep 12 17:30:34.892607 containerd[1548]: time="2025-09-12T17:30:34.892104871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:30:34.892607 containerd[1548]: time="2025-09-12T17:30:34.892576159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:30:34.892607 containerd[1548]: time="2025-09-12T17:30:34.892588518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:30:34.892786 containerd[1548]: time="2025-09-12T17:30:34.892684792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:30:34.893893 kubelet[2641]: I0912 17:30:34.893665 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d9z7\" (UniqueName: \"kubernetes.io/projected/02517692-ae50-4ac7-91ce-88c474cb4478-kube-api-access-7d9z7\") pod \"tigera-operator-58fc44c59b-mmjkl\" (UID: \"02517692-ae50-4ac7-91ce-88c474cb4478\") " pod="tigera-operator/tigera-operator-58fc44c59b-mmjkl" Sep 12 17:30:34.893893 kubelet[2641]: I0912 17:30:34.893707 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/02517692-ae50-4ac7-91ce-88c474cb4478-var-lib-calico\") pod \"tigera-operator-58fc44c59b-mmjkl\" (UID: \"02517692-ae50-4ac7-91ce-88c474cb4478\") " pod="tigera-operator/tigera-operator-58fc44c59b-mmjkl" Sep 12 17:30:34.932261 containerd[1548]: time="2025-09-12T17:30:34.932225216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rcz5v,Uid:1808dc6a-42e2-4c10-ace6-7f60afe8452f,Namespace:kube-system,Attempt:0,} returns sandbox id \"59dda9f1da7bb54d166ec6f3bd2234b55f65c6ac4d26c24ea572d06171645a6c\"" Sep 12 17:30:34.933161 kubelet[2641]: E0912 17:30:34.933132 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:34.935669 containerd[1548]: time="2025-09-12T17:30:34.935568588Z" level=info msg="CreateContainer within sandbox \"59dda9f1da7bb54d166ec6f3bd2234b55f65c6ac4d26c24ea572d06171645a6c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:30:34.953878 containerd[1548]: 
time="2025-09-12T17:30:34.953827424Z" level=info msg="CreateContainer within sandbox \"59dda9f1da7bb54d166ec6f3bd2234b55f65c6ac4d26c24ea572d06171645a6c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c67d48e2af1e93816572e8d5dcce047d6f01e8609c9215b1993ce645e25dd389\"" Sep 12 17:30:34.955750 containerd[1548]: time="2025-09-12T17:30:34.954470340Z" level=info msg="StartContainer for \"c67d48e2af1e93816572e8d5dcce047d6f01e8609c9215b1993ce645e25dd389\"" Sep 12 17:30:35.021612 containerd[1548]: time="2025-09-12T17:30:35.021563402Z" level=info msg="StartContainer for \"c67d48e2af1e93816572e8d5dcce047d6f01e8609c9215b1993ce645e25dd389\" returns successfully" Sep 12 17:30:35.300524 containerd[1548]: time="2025-09-12T17:30:35.300369779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-mmjkl,Uid:02517692-ae50-4ac7-91ce-88c474cb4478,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:30:35.312968 kubelet[2641]: E0912 17:30:35.312935 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:35.332397 containerd[1548]: time="2025-09-12T17:30:35.332315238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:30:35.332397 containerd[1548]: time="2025-09-12T17:30:35.332372915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:30:35.332397 containerd[1548]: time="2025-09-12T17:30:35.332384954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:30:35.332675 containerd[1548]: time="2025-09-12T17:30:35.332478428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:30:35.381250 containerd[1548]: time="2025-09-12T17:30:35.381169967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-mmjkl,Uid:02517692-ae50-4ac7-91ce-88c474cb4478,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"446494dd7f770259c026f70ec060ed5fc2091dbf0f5df9bb52bb69f862a9466c\"" Sep 12 17:30:35.383306 containerd[1548]: time="2025-09-12T17:30:35.383273592Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:30:36.589794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2896724715.mount: Deactivated successfully. 
Sep 12 17:30:37.209315 containerd[1548]: time="2025-09-12T17:30:37.209113363Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:37.210579 containerd[1548]: time="2025-09-12T17:30:37.210538440Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 17:30:37.212519 containerd[1548]: time="2025-09-12T17:30:37.212478808Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:37.215659 containerd[1548]: time="2025-09-12T17:30:37.215609587Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:37.216911 containerd[1548]: time="2025-09-12T17:30:37.216858155Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.833177429s" Sep 12 17:30:37.216911 containerd[1548]: time="2025-09-12T17:30:37.216900192Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 17:30:37.220954 containerd[1548]: time="2025-09-12T17:30:37.220918320Z" level=info msg="CreateContainer within sandbox \"446494dd7f770259c026f70ec060ed5fc2091dbf0f5df9bb52bb69f862a9466c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:30:37.237560 containerd[1548]: time="2025-09-12T17:30:37.237484442Z" level=info msg="CreateContainer within sandbox \"446494dd7f770259c026f70ec060ed5fc2091dbf0f5df9bb52bb69f862a9466c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a60b552378c8b010012ddc17c1e69559bf2a6e09f784acbc52f035130eba520e\"" Sep 12 17:30:37.238140 containerd[1548]: time="2025-09-12T17:30:37.238037650Z" level=info msg="StartContainer for \"a60b552378c8b010012ddc17c1e69559bf2a6e09f784acbc52f035130eba520e\"" Sep 12 17:30:37.287068 containerd[1548]: time="2025-09-12T17:30:37.287003817Z" level=info msg="StartContainer for \"a60b552378c8b010012ddc17c1e69559bf2a6e09f784acbc52f035130eba520e\" returns successfully" Sep 12 17:30:37.328978 kubelet[2641]: I0912 17:30:37.328699 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rcz5v" podStartSLOduration=4.328682367 podStartE2EDuration="4.328682367s" podCreationTimestamp="2025-09-12 17:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:30:35.331336862 +0000 UTC m=+7.163958413" watchObservedRunningTime="2025-09-12 17:30:37.328682367 +0000 UTC m=+9.161303838" Sep 12 17:30:38.181078 kubelet[2641]: E0912 17:30:38.180990 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:38.206297 kubelet[2641]: I0912 17:30:38.205221 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="tigera-operator/tigera-operator-58fc44c59b-mmjkl" podStartSLOduration=2.369382417 podStartE2EDuration="4.205202366s" podCreationTimestamp="2025-09-12 17:30:34 +0000 UTC" firstStartedPulling="2025-09-12 17:30:35.382591236 +0000 UTC m=+7.215212707" lastFinishedPulling="2025-09-12 17:30:37.218411185 +0000 UTC m=+9.051032656" observedRunningTime="2025-09-12 17:30:37.331887901 +0000 UTC m=+9.164509372" watchObservedRunningTime="2025-09-12 17:30:38.205202366 +0000 UTC m=+10.037823837" Sep 12 17:30:38.319334 kubelet[2641]: E0912 17:30:38.319303 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:39.772937 kubelet[2641]: E0912 17:30:39.772895 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:42.065058 update_engine[1536]: I20250912 17:30:42.064965 1536 update_attempter.cc:509] Updating boot flags... Sep 12 17:30:42.130438 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3025) Sep 12 17:30:42.171915 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3026) Sep 12 17:30:42.196913 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3026) Sep 12 17:30:42.681326 sudo[1755]: pam_unix(sudo:session): session closed for user root Sep 12 17:30:42.685512 sshd[1748]: pam_unix(sshd:session): session closed for user core Sep 12 17:30:42.690060 systemd[1]: sshd@6-10.0.0.114:22-10.0.0.1:45766.service: Deactivated successfully. Sep 12 17:30:42.695212 systemd-logind[1530]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:30:42.695528 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:30:42.699473 systemd-logind[1530]: Removed session 7. 
Sep 12 17:30:48.897413 kubelet[2641]: I0912 17:30:48.897266 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg2dz\" (UniqueName: \"kubernetes.io/projected/6c7cf327-dca9-40a7-9648-2c3862cbe306-kube-api-access-tg2dz\") pod \"calico-typha-847b595dd7-2g4md\" (UID: \"6c7cf327-dca9-40a7-9648-2c3862cbe306\") " pod="calico-system/calico-typha-847b595dd7-2g4md"
Sep 12 17:30:48.897413 kubelet[2641]: I0912 17:30:48.897316 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c7cf327-dca9-40a7-9648-2c3862cbe306-tigera-ca-bundle\") pod \"calico-typha-847b595dd7-2g4md\" (UID: \"6c7cf327-dca9-40a7-9648-2c3862cbe306\") " pod="calico-system/calico-typha-847b595dd7-2g4md"
Sep 12 17:30:48.897413 kubelet[2641]: I0912 17:30:48.897342 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6c7cf327-dca9-40a7-9648-2c3862cbe306-typha-certs\") pod \"calico-typha-847b595dd7-2g4md\" (UID: \"6c7cf327-dca9-40a7-9648-2c3862cbe306\") " pod="calico-system/calico-typha-847b595dd7-2g4md"
Sep 12 17:30:49.063228 kubelet[2641]: E0912 17:30:49.063179 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:30:49.065813 containerd[1548]: time="2025-09-12T17:30:49.065751652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-847b595dd7-2g4md,Uid:6c7cf327-dca9-40a7-9648-2c3862cbe306,Namespace:calico-system,Attempt:0,}"
Sep 12 17:30:49.099978 kubelet[2641]: I0912 17:30:49.099940 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fbef1785-dfb4-4b69-b13a-a36a880ef0cb-cni-bin-dir\") pod \"calico-node-vdk56\" (UID: \"fbef1785-dfb4-4b69-b13a-a36a880ef0cb\") " pod="calico-system/calico-node-vdk56"
Sep 12 17:30:49.100513 containerd[1548]: time="2025-09-12T17:30:49.100221476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:30:49.100513 containerd[1548]: time="2025-09-12T17:30:49.100280274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:30:49.100513 containerd[1548]: time="2025-09-12T17:30:49.100316753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:30:49.101115 kubelet[2641]: I0912 17:30:49.100668 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbef1785-dfb4-4b69-b13a-a36a880ef0cb-lib-modules\") pod \"calico-node-vdk56\" (UID: \"fbef1785-dfb4-4b69-b13a-a36a880ef0cb\") " pod="calico-system/calico-node-vdk56"
Sep 12 17:30:49.101115 kubelet[2641]: I0912 17:30:49.100735 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbef1785-dfb4-4b69-b13a-a36a880ef0cb-xtables-lock\") pod \"calico-node-vdk56\" (UID: \"fbef1785-dfb4-4b69-b13a-a36a880ef0cb\") " pod="calico-system/calico-node-vdk56"
Sep 12 17:30:49.101115 kubelet[2641]: I0912 17:30:49.100753 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fbef1785-dfb4-4b69-b13a-a36a880ef0cb-var-run-calico\") pod \"calico-node-vdk56\" (UID: \"fbef1785-dfb4-4b69-b13a-a36a880ef0cb\") " pod="calico-system/calico-node-vdk56"
Sep 12 17:30:49.101115 kubelet[2641]: I0912 17:30:49.100792 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fbef1785-dfb4-4b69-b13a-a36a880ef0cb-cni-log-dir\") pod \"calico-node-vdk56\" (UID: \"fbef1785-dfb4-4b69-b13a-a36a880ef0cb\") " pod="calico-system/calico-node-vdk56"
Sep 12 17:30:49.101115 kubelet[2641]: I0912 17:30:49.100839 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fbef1785-dfb4-4b69-b13a-a36a880ef0cb-cni-net-dir\") pod \"calico-node-vdk56\" (UID: \"fbef1785-dfb4-4b69-b13a-a36a880ef0cb\") " pod="calico-system/calico-node-vdk56"
Sep 12 17:30:49.101286 kubelet[2641]: I0912 17:30:49.100878 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fbef1785-dfb4-4b69-b13a-a36a880ef0cb-node-certs\") pod \"calico-node-vdk56\" (UID: \"fbef1785-dfb4-4b69-b13a-a36a880ef0cb\") " pod="calico-system/calico-node-vdk56"
Sep 12 17:30:49.101286 kubelet[2641]: I0912 17:30:49.100912 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89kp7\" (UniqueName: \"kubernetes.io/projected/fbef1785-dfb4-4b69-b13a-a36a880ef0cb-kube-api-access-89kp7\") pod \"calico-node-vdk56\" (UID: \"fbef1785-dfb4-4b69-b13a-a36a880ef0cb\") " pod="calico-system/calico-node-vdk56"
Sep 12 17:30:49.101286 kubelet[2641]: I0912 17:30:49.100996 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fbef1785-dfb4-4b69-b13a-a36a880ef0cb-policysync\") pod \"calico-node-vdk56\" (UID: \"fbef1785-dfb4-4b69-b13a-a36a880ef0cb\") " pod="calico-system/calico-node-vdk56"
Sep 12 17:30:49.101286 kubelet[2641]: I0912 17:30:49.101014 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fbef1785-dfb4-4b69-b13a-a36a880ef0cb-var-lib-calico\") pod \"calico-node-vdk56\" (UID: \"fbef1785-dfb4-4b69-b13a-a36a880ef0cb\") " pod="calico-system/calico-node-vdk56"
Sep 12 17:30:49.101286 kubelet[2641]: I0912 17:30:49.101032 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fbef1785-dfb4-4b69-b13a-a36a880ef0cb-flexvol-driver-host\") pod \"calico-node-vdk56\" (UID: \"fbef1785-dfb4-4b69-b13a-a36a880ef0cb\") " pod="calico-system/calico-node-vdk56"
Sep 12 17:30:49.101391 kubelet[2641]: I0912 17:30:49.101061 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbef1785-dfb4-4b69-b13a-a36a880ef0cb-tigera-ca-bundle\") pod \"calico-node-vdk56\" (UID: \"fbef1785-dfb4-4b69-b13a-a36a880ef0cb\") " pod="calico-system/calico-node-vdk56"
Sep 12 17:30:49.102774 containerd[1548]: time="2025-09-12T17:30:49.102720277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:30:49.182554 kubelet[2641]: E0912 17:30:49.182424 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t2lbd" podUID="09e6a9f7-4303-4b5f-ad99-a3e9b65f6620"
Sep 12 17:30:49.191616 containerd[1548]: time="2025-09-12T17:30:49.191425178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-847b595dd7-2g4md,Uid:6c7cf327-dca9-40a7-9648-2c3862cbe306,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8516904fb99133b53def37fb29650dcfba8a7e4e8d59710b9ca352f2ee930b0\""
Sep 12 17:30:49.196029 kubelet[2641]: E0912 17:30:49.195630 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:30:49.199099 containerd[1548]: time="2025-09-12T17:30:49.199061775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 17:30:49.204854 kubelet[2641]: I0912 17:30:49.202502 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09e6a9f7-4303-4b5f-ad99-a3e9b65f6620-kubelet-dir\") pod \"csi-node-driver-t2lbd\" (UID: \"09e6a9f7-4303-4b5f-ad99-a3e9b65f6620\") " pod="calico-system/csi-node-driver-t2lbd"
Sep 12 17:30:49.204854 kubelet[2641]: I0912 17:30:49.202576 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsptz\" (UniqueName: \"kubernetes.io/projected/09e6a9f7-4303-4b5f-ad99-a3e9b65f6620-kube-api-access-lsptz\") pod \"csi-node-driver-t2lbd\" (UID: \"09e6a9f7-4303-4b5f-ad99-a3e9b65f6620\") " pod="calico-system/csi-node-driver-t2lbd"
Sep 12 17:30:49.204854 kubelet[2641]: I0912 17:30:49.202614 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/09e6a9f7-4303-4b5f-ad99-a3e9b65f6620-socket-dir\") pod \"csi-node-driver-t2lbd\" (UID: \"09e6a9f7-4303-4b5f-ad99-a3e9b65f6620\") " pod="calico-system/csi-node-driver-t2lbd"
Sep 12 17:30:49.204854 kubelet[2641]: I0912 17:30:49.202658 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/09e6a9f7-4303-4b5f-ad99-a3e9b65f6620-varrun\") pod \"csi-node-driver-t2lbd\" (UID: \"09e6a9f7-4303-4b5f-ad99-a3e9b65f6620\") " pod="calico-system/csi-node-driver-t2lbd"
Sep 12 17:30:49.204854 kubelet[2641]: I0912 17:30:49.202682 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/09e6a9f7-4303-4b5f-ad99-a3e9b65f6620-registration-dir\") pod \"csi-node-driver-t2lbd\" (UID: \"09e6a9f7-4303-4b5f-ad99-a3e9b65f6620\") " pod="calico-system/csi-node-driver-t2lbd"
Sep 12 17:30:49.214576 kubelet[2641]: E0912 17:30:49.214548 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.214725 kubelet[2641]: W0912 17:30:49.214709 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.214790 kubelet[2641]: E0912 17:30:49.214777 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.219690 kubelet[2641]: E0912 17:30:49.219669 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.219828 kubelet[2641]: W0912 17:30:49.219812 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.219900 kubelet[2641]: E0912 17:30:49.219869 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.303583 kubelet[2641]: E0912 17:30:49.303549 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.303583 kubelet[2641]: W0912 17:30:49.303573 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.303728 kubelet[2641]: E0912 17:30:49.303594 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.305249 kubelet[2641]: E0912 17:30:49.305212 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.305249 kubelet[2641]: W0912 17:30:49.305233 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.305569 kubelet[2641]: E0912 17:30:49.305329 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.305606 kubelet[2641]: E0912 17:30:49.305567 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.305606 kubelet[2641]: W0912 17:30:49.305579 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.305606 kubelet[2641]: E0912 17:30:49.305592 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.307053 kubelet[2641]: E0912 17:30:49.306893 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.307053 kubelet[2641]: W0912 17:30:49.306920 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.307053 kubelet[2641]: E0912 17:30:49.306943 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.307263 kubelet[2641]: E0912 17:30:49.307248 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.307294 kubelet[2641]: W0912 17:30:49.307263 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.307332 kubelet[2641]: E0912 17:30:49.307318 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.307769 kubelet[2641]: E0912 17:30:49.307753 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.307769 kubelet[2641]: W0912 17:30:49.307764 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.307879 kubelet[2641]: E0912 17:30:49.307867 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.307971 kubelet[2641]: E0912 17:30:49.307960 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.307997 kubelet[2641]: W0912 17:30:49.307971 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.308100 kubelet[2641]: E0912 17:30:49.308085 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.308270 kubelet[2641]: E0912 17:30:49.308148 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.308270 kubelet[2641]: W0912 17:30:49.308156 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.308270 kubelet[2641]: E0912 17:30:49.308167 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.308425 kubelet[2641]: E0912 17:30:49.308411 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.308425 kubelet[2641]: W0912 17:30:49.308420 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.308479 kubelet[2641]: E0912 17:30:49.308437 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.308709 kubelet[2641]: E0912 17:30:49.308694 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.308709 kubelet[2641]: W0912 17:30:49.308707 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.308776 kubelet[2641]: E0912 17:30:49.308719 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.308962 kubelet[2641]: E0912 17:30:49.308948 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.309014 kubelet[2641]: W0912 17:30:49.308963 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.309014 kubelet[2641]: E0912 17:30:49.308976 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.309164 kubelet[2641]: E0912 17:30:49.309152 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.309164 kubelet[2641]: W0912 17:30:49.309163 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.309227 kubelet[2641]: E0912 17:30:49.309213 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.309341 kubelet[2641]: E0912 17:30:49.309330 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.309341 kubelet[2641]: W0912 17:30:49.309339 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.309414 kubelet[2641]: E0912 17:30:49.309401 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.309468 kubelet[2641]: E0912 17:30:49.309459 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.309468 kubelet[2641]: W0912 17:30:49.309467 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.309570 kubelet[2641]: E0912 17:30:49.309539 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.309606 kubelet[2641]: E0912 17:30:49.309593 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.309606 kubelet[2641]: W0912 17:30:49.309600 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.309658 kubelet[2641]: E0912 17:30:49.309612 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.309758 kubelet[2641]: E0912 17:30:49.309739 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.309758 kubelet[2641]: W0912 17:30:49.309748 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.309758 kubelet[2641]: E0912 17:30:49.309757 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.309918 kubelet[2641]: E0912 17:30:49.309908 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.309918 kubelet[2641]: W0912 17:30:49.309918 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.309964 kubelet[2641]: E0912 17:30:49.309928 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.310202 kubelet[2641]: E0912 17:30:49.310146 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.310202 kubelet[2641]: W0912 17:30:49.310166 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.310202 kubelet[2641]: E0912 17:30:49.310185 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.310344 kubelet[2641]: E0912 17:30:49.310331 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.310344 kubelet[2641]: W0912 17:30:49.310342 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.310395 kubelet[2641]: E0912 17:30:49.310354 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.310594 kubelet[2641]: E0912 17:30:49.310581 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.310594 kubelet[2641]: W0912 17:30:49.310593 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.310721 kubelet[2641]: E0912 17:30:49.310661 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.310760 kubelet[2641]: E0912 17:30:49.310750 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.310760 kubelet[2641]: W0912 17:30:49.310756 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.310828 kubelet[2641]: E0912 17:30:49.310793 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.312113 kubelet[2641]: E0912 17:30:49.310947 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.312113 kubelet[2641]: W0912 17:30:49.310958 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.312113 kubelet[2641]: E0912 17:30:49.310971 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.312113 kubelet[2641]: E0912 17:30:49.311681 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.312113 kubelet[2641]: W0912 17:30:49.311697 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.312113 kubelet[2641]: E0912 17:30:49.311713 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.313583 kubelet[2641]: E0912 17:30:49.313320 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.313583 kubelet[2641]: W0912 17:30:49.313341 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.313583 kubelet[2641]: E0912 17:30:49.313355 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.313773 containerd[1548]: time="2025-09-12T17:30:49.313573736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vdk56,Uid:fbef1785-dfb4-4b69-b13a-a36a880ef0cb,Namespace:calico-system,Attempt:0,}"
Sep 12 17:30:49.315373 kubelet[2641]: E0912 17:30:49.315351 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.315373 kubelet[2641]: W0912 17:30:49.315370 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.315450 kubelet[2641]: E0912 17:30:49.315392 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.327420 kubelet[2641]: E0912 17:30:49.327388 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:49.327420 kubelet[2641]: W0912 17:30:49.327412 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:49.327566 kubelet[2641]: E0912 17:30:49.327432 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:49.341773 containerd[1548]: time="2025-09-12T17:30:49.341683803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:30:49.342067 containerd[1548]: time="2025-09-12T17:30:49.341959754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:30:49.342067 containerd[1548]: time="2025-09-12T17:30:49.342011273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:30:49.342368 containerd[1548]: time="2025-09-12T17:30:49.342292904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:30:49.395298 containerd[1548]: time="2025-09-12T17:30:49.395220622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vdk56,Uid:fbef1785-dfb4-4b69-b13a-a36a880ef0cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"f3cdf50590ef0e54355179df8c72eb07d2d10c6273434312cac952a89d8faadc\""
Sep 12 17:30:50.389229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3807622487.mount: Deactivated successfully.
Sep 12 17:30:50.844927 containerd[1548]: time="2025-09-12T17:30:50.844880169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:50.845833 containerd[1548]: time="2025-09-12T17:30:50.845766342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 12 17:30:50.846845 containerd[1548]: time="2025-09-12T17:30:50.846696314Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:50.849100 containerd[1548]: time="2025-09-12T17:30:50.849058642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:50.850261 containerd[1548]: time="2025-09-12T17:30:50.850233486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.650741685s"
Sep 12 17:30:50.850350 containerd[1548]: time="2025-09-12T17:30:50.850265165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 12 17:30:50.851105 containerd[1548]: time="2025-09-12T17:30:50.851056541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 12 17:30:50.867874 containerd[1548]: time="2025-09-12T17:30:50.867827992Z" level=info msg="CreateContainer within sandbox \"b8516904fb99133b53def37fb29650dcfba8a7e4e8d59710b9ca352f2ee930b0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 12 17:30:50.880766 containerd[1548]: time="2025-09-12T17:30:50.880718400Z" level=info msg="CreateContainer within sandbox \"b8516904fb99133b53def37fb29650dcfba8a7e4e8d59710b9ca352f2ee930b0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c324aed3bf5e87060d74ac1f8057da32bceeea8d7127f561a80fa21ed3f32e6d\""
Sep 12 17:30:50.883064 containerd[1548]: time="2025-09-12T17:30:50.882994371Z" level=info msg="StartContainer for \"c324aed3bf5e87060d74ac1f8057da32bceeea8d7127f561a80fa21ed3f32e6d\""
Sep 12 17:30:50.985175 containerd[1548]: time="2025-09-12T17:30:50.985119508Z" level=info msg="StartContainer for \"c324aed3bf5e87060d74ac1f8057da32bceeea8d7127f561a80fa21ed3f32e6d\" returns successfully"
Sep 12 17:30:51.284868 kubelet[2641]: E0912 17:30:51.284825 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t2lbd" podUID="09e6a9f7-4303-4b5f-ad99-a3e9b65f6620"
Sep 12 17:30:51.355522 kubelet[2641]: E0912 17:30:51.355487 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:30:51.417736 kubelet[2641]: E0912 17:30:51.417710 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.417736 kubelet[2641]: W0912 17:30:51.417730 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.417736 kubelet[2641]: E0912 17:30:51.417750 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.418015 kubelet[2641]: E0912 17:30:51.417921 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.418015 kubelet[2641]: W0912 17:30:51.417928 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.418015 kubelet[2641]: E0912 17:30:51.417936 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.418186 kubelet[2641]: E0912 17:30:51.418175 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.418186 kubelet[2641]: W0912 17:30:51.418185 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.418256 kubelet[2641]: E0912 17:30:51.418194 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.418358 kubelet[2641]: E0912 17:30:51.418348 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.418358 kubelet[2641]: W0912 17:30:51.418357 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.418427 kubelet[2641]: E0912 17:30:51.418365 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.418507 kubelet[2641]: E0912 17:30:51.418498 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.418507 kubelet[2641]: W0912 17:30:51.418507 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.418577 kubelet[2641]: E0912 17:30:51.418519 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.418647 kubelet[2641]: E0912 17:30:51.418638 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.418647 kubelet[2641]: W0912 17:30:51.418647 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.418724 kubelet[2641]: E0912 17:30:51.418654 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.418782 kubelet[2641]: E0912 17:30:51.418773 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.418782 kubelet[2641]: W0912 17:30:51.418782 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.418873 kubelet[2641]: E0912 17:30:51.418788 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.418964 kubelet[2641]: E0912 17:30:51.418955 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.418964 kubelet[2641]: W0912 17:30:51.418964 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.419036 kubelet[2641]: E0912 17:30:51.418972 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.419156 kubelet[2641]: E0912 17:30:51.419146 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.419210 kubelet[2641]: W0912 17:30:51.419156 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.419210 kubelet[2641]: E0912 17:30:51.419164 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.419297 kubelet[2641]: E0912 17:30:51.419288 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.419297 kubelet[2641]: W0912 17:30:51.419297 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.419365 kubelet[2641]: E0912 17:30:51.419304 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.419433 kubelet[2641]: E0912 17:30:51.419424 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.419433 kubelet[2641]: W0912 17:30:51.419433 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.419503 kubelet[2641]: E0912 17:30:51.419440 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.419565 kubelet[2641]: E0912 17:30:51.419557 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.419565 kubelet[2641]: W0912 17:30:51.419564 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.419632 kubelet[2641]: E0912 17:30:51.419572 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.419699 kubelet[2641]: E0912 17:30:51.419690 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.419699 kubelet[2641]: W0912 17:30:51.419699 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.419766 kubelet[2641]: E0912 17:30:51.419705 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.419852 kubelet[2641]: E0912 17:30:51.419842 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.419852 kubelet[2641]: W0912 17:30:51.419851 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.419924 kubelet[2641]: E0912 17:30:51.419858 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.419992 kubelet[2641]: E0912 17:30:51.419983 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.420027 kubelet[2641]: W0912 17:30:51.419992 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.420027 kubelet[2641]: E0912 17:30:51.419999 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.424503 kubelet[2641]: E0912 17:30:51.424367 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.424503 kubelet[2641]: W0912 17:30:51.424384 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.424503 kubelet[2641]: E0912 17:30:51.424406 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.424891 kubelet[2641]: E0912 17:30:51.424824 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.424891 kubelet[2641]: W0912 17:30:51.424838 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.424891 kubelet[2641]: E0912 17:30:51.424857 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.425111 kubelet[2641]: E0912 17:30:51.425081 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.425111 kubelet[2641]: W0912 17:30:51.425109 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.425206 kubelet[2641]: E0912 17:30:51.425126 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.425327 kubelet[2641]: E0912 17:30:51.425317 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.425327 kubelet[2641]: W0912 17:30:51.425328 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.425327 kubelet[2641]: E0912 17:30:51.425353 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.425578 kubelet[2641]: E0912 17:30:51.425567 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.425578 kubelet[2641]: W0912 17:30:51.425578 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.425635 kubelet[2641]: E0912 17:30:51.425590 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.425882 kubelet[2641]: E0912 17:30:51.425866 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.425882 kubelet[2641]: W0912 17:30:51.425880 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.425963 kubelet[2641]: E0912 17:30:51.425893 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.426324 kubelet[2641]: E0912 17:30:51.426203 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.426324 kubelet[2641]: W0912 17:30:51.426220 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.426324 kubelet[2641]: E0912 17:30:51.426238 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.426490 kubelet[2641]: E0912 17:30:51.426478 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.426651 kubelet[2641]: W0912 17:30:51.426546 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.426651 kubelet[2641]: E0912 17:30:51.426572 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.426810 kubelet[2641]: E0912 17:30:51.426786 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.427011 kubelet[2641]: W0912 17:30:51.426868 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.427011 kubelet[2641]: E0912 17:30:51.426892 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.427167 kubelet[2641]: E0912 17:30:51.427153 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.427238 kubelet[2641]: W0912 17:30:51.427226 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.427309 kubelet[2641]: E0912 17:30:51.427298 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.427525 kubelet[2641]: E0912 17:30:51.427505 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.427525 kubelet[2641]: W0912 17:30:51.427520 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.427596 kubelet[2641]: E0912 17:30:51.427535 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.427707 kubelet[2641]: E0912 17:30:51.427692 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.427707 kubelet[2641]: W0912 17:30:51.427704 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.427821 kubelet[2641]: E0912 17:30:51.427715 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.427982 kubelet[2641]: E0912 17:30:51.427940 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.427982 kubelet[2641]: W0912 17:30:51.427950 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.427982 kubelet[2641]: E0912 17:30:51.427967 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.428160 kubelet[2641]: E0912 17:30:51.428146 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.428160 kubelet[2641]: W0912 17:30:51.428157 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.428216 kubelet[2641]: E0912 17:30:51.428168 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.428333 kubelet[2641]: E0912 17:30:51.428321 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.428333 kubelet[2641]: W0912 17:30:51.428331 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.428406 kubelet[2641]: E0912 17:30:51.428342 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.428509 kubelet[2641]: E0912 17:30:51.428498 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.428509 kubelet[2641]: W0912 17:30:51.428507 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.428565 kubelet[2641]: E0912 17:30:51.428516 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.428796 kubelet[2641]: E0912 17:30:51.428785 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.428796 kubelet[2641]: W0912 17:30:51.428796 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.428906 kubelet[2641]: E0912 17:30:51.428823 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.429002 kubelet[2641]: E0912 17:30:51.428992 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:51.429002 kubelet[2641]: W0912 17:30:51.429002 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:51.429101 kubelet[2641]: E0912 17:30:51.429011 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:51.796687 containerd[1548]: time="2025-09-12T17:30:51.796629888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:51.797541 containerd[1548]: time="2025-09-12T17:30:51.797416546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814"
Sep 12 17:30:51.798257 containerd[1548]: time="2025-09-12T17:30:51.798201003Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:51.801373 containerd[1548]: time="2025-09-12T17:30:51.800863525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:51.801712 containerd[1548]: time="2025-09-12T17:30:51.801679062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 950.587281ms"
Sep 12 17:30:51.801817 containerd[1548]: time="2025-09-12T17:30:51.801783379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 12 17:30:51.805961 containerd[1548]: time="2025-09-12T17:30:51.805928738Z" level=info msg="CreateContainer within sandbox \"f3cdf50590ef0e54355179df8c72eb07d2d10c6273434312cac952a89d8faadc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 12 17:30:51.819653 containerd[1548]: time="2025-09-12T17:30:51.819536742Z" level=info msg="CreateContainer within sandbox \"f3cdf50590ef0e54355179df8c72eb07d2d10c6273434312cac952a89d8faadc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3d2f0f304e41d51d05a8c8b85b2d807a847c2ecccd1f4580cee28f5ec8a57bef\""
Sep 12 17:30:51.820648 containerd[1548]: time="2025-09-12T17:30:51.820002289Z" level=info msg="StartContainer for \"3d2f0f304e41d51d05a8c8b85b2d807a847c2ecccd1f4580cee28f5ec8a57bef\""
Sep 12 17:30:51.888130 containerd[1548]: time="2025-09-12T17:30:51.888073909Z" level=info msg="StartContainer for \"3d2f0f304e41d51d05a8c8b85b2d807a847c2ecccd1f4580cee28f5ec8a57bef\" returns successfully"
Sep 12 17:30:51.921397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d2f0f304e41d51d05a8c8b85b2d807a847c2ecccd1f4580cee28f5ec8a57bef-rootfs.mount: Deactivated successfully.
Sep 12 17:30:51.940419 containerd[1548]: time="2025-09-12T17:30:51.936728894Z" level=info msg="shim disconnected" id=3d2f0f304e41d51d05a8c8b85b2d807a847c2ecccd1f4580cee28f5ec8a57bef namespace=k8s.io
Sep 12 17:30:51.940419 containerd[1548]: time="2025-09-12T17:30:51.940223793Z" level=warning msg="cleaning up after shim disconnected" id=3d2f0f304e41d51d05a8c8b85b2d807a847c2ecccd1f4580cee28f5ec8a57bef namespace=k8s.io
Sep 12 17:30:51.940419 containerd[1548]: time="2025-09-12T17:30:51.940236032Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:30:52.361267 kubelet[2641]: I0912 17:30:52.361078 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:30:52.362523 kubelet[2641]: E0912 17:30:52.362082 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:30:52.362667 containerd[1548]: time="2025-09-12T17:30:52.362311841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 17:30:52.377855 kubelet[2641]: I0912 17:30:52.377774 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-847b595dd7-2g4md" podStartSLOduration=2.724391128 podStartE2EDuration="4.377755731s" podCreationTimestamp="2025-09-12 17:30:48 +0000 UTC" firstStartedPulling="2025-09-12 17:30:49.197577382 +0000 UTC m=+21.030198853" lastFinishedPulling="2025-09-12 17:30:50.850942025 +0000 UTC m=+22.683563456" observedRunningTime="2025-09-12 17:30:51.367289534 +0000 UTC m=+23.199911005" watchObservedRunningTime="2025-09-12 17:30:52.377755731 +0000 UTC m=+24.210377202"
Sep 12 17:30:53.284463 kubelet[2641]: E0912 17:30:53.284398 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t2lbd" podUID="09e6a9f7-4303-4b5f-ad99-a3e9b65f6620"
Sep 12 17:30:54.212433 containerd[1548]: time="2025-09-12T17:30:54.212388443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:54.213495 containerd[1548]: time="2025-09-12T17:30:54.213296820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 12 17:30:54.214825 containerd[1548]: time="2025-09-12T17:30:54.214114279Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:54.217324 containerd[1548]: time="2025-09-12T17:30:54.216839169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:54.217400 containerd[1548]: time="2025-09-12T17:30:54.217339676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 1.854990597s"
Sep 12 17:30:54.217400 containerd[1548]: time="2025-09-12T17:30:54.217375956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 12 17:30:54.219288 containerd[1548]: time="2025-09-12T17:30:54.219256987Z" level=info msg="CreateContainer within sandbox \"f3cdf50590ef0e54355179df8c72eb07d2d10c6273434312cac952a89d8faadc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 17:30:54.242084 containerd[1548]: time="2025-09-12T17:30:54.242033963Z" level=info msg="CreateContainer within sandbox \"f3cdf50590ef0e54355179df8c72eb07d2d10c6273434312cac952a89d8faadc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a1fc9109fb6d3c15f60cd6965bfd27717934ecc4ba222d8c0ad39d07ca66d4c4\""
Sep 12 17:30:54.242789 containerd[1548]: time="2025-09-12T17:30:54.242687507Z" level=info msg="StartContainer for \"a1fc9109fb6d3c15f60cd6965bfd27717934ecc4ba222d8c0ad39d07ca66d4c4\""
Sep 12 17:30:54.263200 systemd[1]: run-containerd-runc-k8s.io-a1fc9109fb6d3c15f60cd6965bfd27717934ecc4ba222d8c0ad39d07ca66d4c4-runc.LDm3eB.mount: Deactivated successfully.
Sep 12 17:30:54.342011 containerd[1548]: time="2025-09-12T17:30:54.341960922Z" level=info msg="StartContainer for \"a1fc9109fb6d3c15f60cd6965bfd27717934ecc4ba222d8c0ad39d07ca66d4c4\" returns successfully"
Sep 12 17:30:54.797826 containerd[1548]: time="2025-09-12T17:30:54.797748719Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:30:54.821931 containerd[1548]: time="2025-09-12T17:30:54.821856301Z" level=info msg="shim disconnected" id=a1fc9109fb6d3c15f60cd6965bfd27717934ecc4ba222d8c0ad39d07ca66d4c4 namespace=k8s.io
Sep 12 17:30:54.821931 containerd[1548]: time="2025-09-12T17:30:54.821921820Z" level=warning msg="cleaning up after shim disconnected" id=a1fc9109fb6d3c15f60cd6965bfd27717934ecc4ba222d8c0ad39d07ca66d4c4 namespace=k8s.io
Sep 12 17:30:54.821931 containerd[1548]: time="2025-09-12T17:30:54.821931179Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:30:54.838019 kubelet[2641]: I0912 17:30:54.837977 2641 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 12 17:30:54.955969 kubelet[2641]: I0912 17:30:54.955898 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81976e56-dd02-491a-b629-59ec2cab5a05-whisker-backend-key-pair\") pod \"whisker-6777b9698d-87svx\" (UID: \"81976e56-dd02-491a-b629-59ec2cab5a05\") " pod="calico-system/whisker-6777b9698d-87svx"
Sep 12 17:30:54.955969 kubelet[2641]: I0912 17:30:54.955957 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/824c5a12-bcb9-44ed-a3d8-24c299fba85d-calico-apiserver-certs\") pod \"calico-apiserver-75f496c6fb-7szgf\" (UID: \"824c5a12-bcb9-44ed-a3d8-24c299fba85d\") " pod="calico-apiserver/calico-apiserver-75f496c6fb-7szgf"
Sep 12 17:30:54.955969 kubelet[2641]: I0912 17:30:54.955977 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81976e56-dd02-491a-b629-59ec2cab5a05-whisker-ca-bundle\") pod \"whisker-6777b9698d-87svx\" (UID: \"81976e56-dd02-491a-b629-59ec2cab5a05\") " pod="calico-system/whisker-6777b9698d-87svx"
Sep 12 17:30:54.956346 kubelet[2641]: I0912 17:30:54.956007 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1786da27-c74b-428e-9360-4f44ff994f41-config-volume\") pod \"coredns-7c65d6cfc9-58qjr\" (UID: \"1786da27-c74b-428e-9360-4f44ff994f41\") " pod="kube-system/coredns-7c65d6cfc9-58qjr"
Sep 12 17:30:54.956346 kubelet[2641]: I0912 17:30:54.956028 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvxs4\" (UniqueName: \"kubernetes.io/projected/81976e56-dd02-491a-b629-59ec2cab5a05-kube-api-access-fvxs4\") pod \"whisker-6777b9698d-87svx\" (UID: \"81976e56-dd02-491a-b629-59ec2cab5a05\") " pod="calico-system/whisker-6777b9698d-87svx"
Sep 12 17:30:54.956346 kubelet[2641]: I0912 17:30:54.956043 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft868\" (UniqueName: \"kubernetes.io/projected/89f3f547-52ae-4646-86ac-31102c426a8a-kube-api-access-ft868\") pod \"calico-kube-controllers-64d9df5885-5z5xb\" (UID: \"89f3f547-52ae-4646-86ac-31102c426a8a\") " pod="calico-system/calico-kube-controllers-64d9df5885-5z5xb"
Sep 12 17:30:54.956346 kubelet[2641]: I0912 17:30:54.956070 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tg8b\" (UniqueName: \"kubernetes.io/projected/824c5a12-bcb9-44ed-a3d8-24c299fba85d-kube-api-access-4tg8b\") pod \"calico-apiserver-75f496c6fb-7szgf\" (UID: \"824c5a12-bcb9-44ed-a3d8-24c299fba85d\") " pod="calico-apiserver/calico-apiserver-75f496c6fb-7szgf"
Sep 12 17:30:54.956346 kubelet[2641]: I0912 17:30:54.956104 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89f3f547-52ae-4646-86ac-31102c426a8a-tigera-ca-bundle\") pod \"calico-kube-controllers-64d9df5885-5z5xb\" (UID: \"89f3f547-52ae-4646-86ac-31102c426a8a\") " pod="calico-system/calico-kube-controllers-64d9df5885-5z5xb"
Sep 12 17:30:54.956483 kubelet[2641]: I0912 17:30:54.956122 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf4mj\" (UniqueName: \"kubernetes.io/projected/e9d7de93-a50e-470c-992a-6e1a6cde9578-kube-api-access-qf4mj\") pod \"calico-apiserver-75f496c6fb-6462h\" (UID: \"e9d7de93-a50e-470c-992a-6e1a6cde9578\") " pod="calico-apiserver/calico-apiserver-75f496c6fb-6462h"
Sep 12 17:30:54.956483 kubelet[2641]: I0912 17:30:54.956154 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcl4f\" (UniqueName: \"kubernetes.io/projected/1786da27-c74b-428e-9360-4f44ff994f41-kube-api-access-rcl4f\") pod \"coredns-7c65d6cfc9-58qjr\" (UID: \"1786da27-c74b-428e-9360-4f44ff994f41\") " pod="kube-system/coredns-7c65d6cfc9-58qjr"
Sep 12 17:30:54.956483 kubelet[2641]: I0912 17:30:54.956185 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7e50a73-6884-4846-84af-b99c62b21ac0-config-volume\") pod \"coredns-7c65d6cfc9-tmx7c\" (UID: \"c7e50a73-6884-4846-84af-b99c62b21ac0\") " pod="kube-system/coredns-7c65d6cfc9-tmx7c"
Sep 12 17:30:54.956483 kubelet[2641]: I0912 17:30:54.956201 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d33a2b08-c505-4cce-a314-e4b791e0c009-goldmane-key-pair\") pod \"goldmane-7988f88666-gz6qp\" (UID: \"d33a2b08-c505-4cce-a314-e4b791e0c009\") " pod="calico-system/goldmane-7988f88666-gz6qp"
Sep 12 17:30:54.956483 kubelet[2641]: I0912 17:30:54.956218 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pdz9\" (UniqueName: \"kubernetes.io/projected/d33a2b08-c505-4cce-a314-e4b791e0c009-kube-api-access-5pdz9\") pod \"goldmane-7988f88666-gz6qp\" (UID: \"d33a2b08-c505-4cce-a314-e4b791e0c009\") " pod="calico-system/goldmane-7988f88666-gz6qp"
Sep 12 17:30:54.956592 kubelet[2641]: I0912 17:30:54.956235 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d33a2b08-c505-4cce-a314-e4b791e0c009-config\") pod \"goldmane-7988f88666-gz6qp\" (UID: \"d33a2b08-c505-4cce-a314-e4b791e0c009\") " pod="calico-system/goldmane-7988f88666-gz6qp"
Sep 12 17:30:54.956592 kubelet[2641]: I0912 17:30:54.956249 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d33a2b08-c505-4cce-a314-e4b791e0c009-goldmane-ca-bundle\") pod \"goldmane-7988f88666-gz6qp\" (UID: \"d33a2b08-c505-4cce-a314-e4b791e0c009\") " pod="calico-system/goldmane-7988f88666-gz6qp"
Sep 12 17:30:54.956592 kubelet[2641]: I0912 17:30:54.956263 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lfnq\" (UniqueName: \"kubernetes.io/projected/c7e50a73-6884-4846-84af-b99c62b21ac0-kube-api-access-7lfnq\") pod \"coredns-7c65d6cfc9-tmx7c\" (UID: \"c7e50a73-6884-4846-84af-b99c62b21ac0\") " pod="kube-system/coredns-7c65d6cfc9-tmx7c"
Sep 12 17:30:54.956592 kubelet[2641]: I0912 17:30:54.956281 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e9d7de93-a50e-470c-992a-6e1a6cde9578-calico-apiserver-certs\") pod \"calico-apiserver-75f496c6fb-6462h\" (UID: \"e9d7de93-a50e-470c-992a-6e1a6cde9578\") " pod="calico-apiserver/calico-apiserver-75f496c6fb-6462h"
Sep 12 17:30:55.173020 kubelet[2641]: E0912 17:30:55.172889 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:30:55.173844 containerd[1548]: time="2025-09-12T17:30:55.173781655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58qjr,Uid:1786da27-c74b-428e-9360-4f44ff994f41,Namespace:kube-system,Attempt:0,}"
Sep 12 17:30:55.182169 kubelet[2641]: E0912 17:30:55.182133 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:30:55.183105 containerd[1548]: time="2025-09-12T17:30:55.182571039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tmx7c,Uid:c7e50a73-6884-4846-84af-b99c62b21ac0,Namespace:kube-system,Attempt:0,}"
Sep 12 17:30:55.183105 containerd[1548]: time="2025-09-12T17:30:55.182759434Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:calico-apiserver-75f496c6fb-6462h,Uid:e9d7de93-a50e-470c-992a-6e1a6cde9578,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:30:55.183794 containerd[1548]: time="2025-09-12T17:30:55.183434737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gz6qp,Uid:d33a2b08-c505-4cce-a314-e4b791e0c009,Namespace:calico-system,Attempt:0,}" Sep 12 17:30:55.184207 containerd[1548]: time="2025-09-12T17:30:55.184043162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d9df5885-5z5xb,Uid:89f3f547-52ae-4646-86ac-31102c426a8a,Namespace:calico-system,Attempt:0,}" Sep 12 17:30:55.190326 containerd[1548]: time="2025-09-12T17:30:55.190291288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f496c6fb-7szgf,Uid:824c5a12-bcb9-44ed-a3d8-24c299fba85d,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:30:55.196215 containerd[1548]: time="2025-09-12T17:30:55.196180463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6777b9698d-87svx,Uid:81976e56-dd02-491a-b629-59ec2cab5a05,Namespace:calico-system,Attempt:0,}" Sep 12 17:30:55.264563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1fc9109fb6d3c15f60cd6965bfd27717934ecc4ba222d8c0ad39d07ca66d4c4-rootfs.mount: Deactivated successfully. Sep 12 17:30:55.291094 containerd[1548]: time="2025-09-12T17:30:55.291035328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t2lbd,Uid:09e6a9f7-4303-4b5f-ad99-a3e9b65f6620,Namespace:calico-system,Attempt:0,}" Sep 12 17:30:55.367004 containerd[1548]: time="2025-09-12T17:30:55.366925779Z" level=error msg="Failed to destroy network for sandbox \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.367479 containerd[1548]: time="2025-09-12T17:30:55.367432767Z" level=error msg="encountered an error cleaning up failed sandbox \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.367534 containerd[1548]: time="2025-09-12T17:30:55.367499405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6777b9698d-87svx,Uid:81976e56-dd02-491a-b629-59ec2cab5a05,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.369823 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f-shm.mount: Deactivated successfully. 
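The sandbox failures that follow all share one root cause visible in the entries above: the install-cni container has written /etc/cni/net.d/calico-kubeconfig (containerd's watcher fired but could not yet load a complete network config), the node has just been marked Ready, and the scheduler has placed workload pods, but the Calico CNI plugin resolves the node name by reading /var/lib/calico/nodename, a file that exists only once the calico/node agent is running and has mounted /var/lib/calico/ (as the error text itself says). A minimal sketch of that failing step, assuming the plugin's check behaves like a plain file read (an illustration, not Calico's actual source):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename mirrors the failing step in the log: until calico-node
    // starts and writes this file, the read fails with "no such file or
    // directory" and every CNI ADD/DELETE on this node aborts.
    func readNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            // Approximates the error string repeated throughout the log.
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := readNodename()
        if err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
        fmt.Println(name)
    }

Every RunPodSandbox and StopPodSandbox below fails at this same point, which is why the identical stat error appears for each sandbox ID.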
Sep 12 17:30:55.373308 kubelet[2641]: E0912 17:30:55.373147 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.376073 kubelet[2641]: E0912 17:30:55.376013 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6777b9698d-87svx" Sep 12 17:30:55.376168 kubelet[2641]: E0912 17:30:55.376078 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6777b9698d-87svx" Sep 12 17:30:55.376168 kubelet[2641]: E0912 17:30:55.376133 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6777b9698d-87svx_calico-system(81976e56-dd02-491a-b629-59ec2cab5a05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6777b9698d-87svx_calico-system(81976e56-dd02-491a-b629-59ec2cab5a05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6777b9698d-87svx" podUID="81976e56-dd02-491a-b629-59ec2cab5a05" Sep 12 17:30:55.389016 containerd[1548]: time="2025-09-12T17:30:55.388197015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:30:55.395374 containerd[1548]: time="2025-09-12T17:30:55.391130103Z" level=error msg="Failed to destroy network for sandbox \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.395374 containerd[1548]: time="2025-09-12T17:30:55.392962098Z" level=error msg="encountered an error cleaning up failed sandbox \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.395374 containerd[1548]: time="2025-09-12T17:30:55.393025056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tmx7c,Uid:c7e50a73-6884-4846-84af-b99c62b21ac0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.395374 containerd[1548]: time="2025-09-12T17:30:55.394549379Z" level=info msg="StopPodSandbox for \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\"" Sep 12 17:30:55.395374 containerd[1548]: time="2025-09-12T17:30:55.395234442Z" level=info msg="Ensure that sandbox 41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f in task-service has been cleanup successfully" Sep 12 17:30:55.395585 kubelet[2641]: I0912 17:30:55.392098 2641 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:30:55.395585 kubelet[2641]: E0912 17:30:55.394369 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.395585 kubelet[2641]: E0912 17:30:55.394408 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tmx7c" Sep 12 17:30:55.395585 kubelet[2641]: E0912 17:30:55.394429 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tmx7c" Sep 12 17:30:55.395695 kubelet[2641]: E0912 17:30:55.394464 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tmx7c_kube-system(c7e50a73-6884-4846-84af-b99c62b21ac0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tmx7c_kube-system(c7e50a73-6884-4846-84af-b99c62b21ac0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tmx7c" podUID="c7e50a73-6884-4846-84af-b99c62b21ac0" Sep 12 17:30:55.421269 containerd[1548]: time="2025-09-12T17:30:55.421213602Z" level=error msg="Failed to destroy network for sandbox \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.423126 containerd[1548]: 
time="2025-09-12T17:30:55.422795483Z" level=error msg="encountered an error cleaning up failed sandbox \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.424878 containerd[1548]: time="2025-09-12T17:30:55.424831033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gz6qp,Uid:d33a2b08-c505-4cce-a314-e4b791e0c009,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.425130 kubelet[2641]: E0912 17:30:55.425092 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.425193 kubelet[2641]: E0912 17:30:55.425156 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-gz6qp" Sep 12 17:30:55.425193 kubelet[2641]: E0912 17:30:55.425178 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-gz6qp" Sep 12 17:30:55.425250 kubelet[2641]: E0912 17:30:55.425221 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-gz6qp_calico-system(d33a2b08-c505-4cce-a314-e4b791e0c009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-gz6qp_calico-system(d33a2b08-c505-4cce-a314-e4b791e0c009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-gz6qp" podUID="d33a2b08-c505-4cce-a314-e4b791e0c009" Sep 12 17:30:55.431002 containerd[1548]: time="2025-09-12T17:30:55.430937483Z" level=error msg="Failed to destroy network for sandbox \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 12 17:30:55.431808 containerd[1548]: time="2025-09-12T17:30:55.431653825Z" level=error msg="encountered an error cleaning up failed sandbox \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.431952 containerd[1548]: time="2025-09-12T17:30:55.431922819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58qjr,Uid:1786da27-c74b-428e-9360-4f44ff994f41,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.432399 kubelet[2641]: E0912 17:30:55.432362 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.432467 kubelet[2641]: E0912 17:30:55.432429 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-58qjr" Sep 12 17:30:55.432467 kubelet[2641]: E0912 17:30:55.432449 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-58qjr" Sep 12 17:30:55.432545 kubelet[2641]: E0912 17:30:55.432489 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-58qjr_kube-system(1786da27-c74b-428e-9360-4f44ff994f41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-58qjr_kube-system(1786da27-c74b-428e-9360-4f44ff994f41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-58qjr" podUID="1786da27-c74b-428e-9360-4f44ff994f41" Sep 12 17:30:55.438226 containerd[1548]: time="2025-09-12T17:30:55.438016629Z" level=error msg="Failed to destroy network for sandbox \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.438542 containerd[1548]: time="2025-09-12T17:30:55.438500457Z" level=error msg="encountered an error cleaning up failed sandbox \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.438607 containerd[1548]: time="2025-09-12T17:30:55.438553855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f496c6fb-7szgf,Uid:824c5a12-bcb9-44ed-a3d8-24c299fba85d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.439261 kubelet[2641]: E0912 17:30:55.439214 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.439381 kubelet[2641]: E0912 17:30:55.439279 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75f496c6fb-7szgf" Sep 12 17:30:55.439381 kubelet[2641]: E0912 17:30:55.439300 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75f496c6fb-7szgf" Sep 12 17:30:55.439381 kubelet[2641]: E0912 17:30:55.439348 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75f496c6fb-7szgf_calico-apiserver(824c5a12-bcb9-44ed-a3d8-24c299fba85d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75f496c6fb-7szgf_calico-apiserver(824c5a12-bcb9-44ed-a3d8-24c299fba85d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75f496c6fb-7szgf" podUID="824c5a12-bcb9-44ed-a3d8-24c299fba85d" Sep 12 17:30:55.444676 containerd[1548]: time="2025-09-12T17:30:55.444603826Z" level=error msg="Failed to destroy network for sandbox 
\"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.445100 containerd[1548]: time="2025-09-12T17:30:55.445002817Z" level=error msg="encountered an error cleaning up failed sandbox \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.445154 containerd[1548]: time="2025-09-12T17:30:55.445127853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d9df5885-5z5xb,Uid:89f3f547-52ae-4646-86ac-31102c426a8a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.445394 kubelet[2641]: E0912 17:30:55.445339 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.445453 kubelet[2641]: E0912 17:30:55.445417 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64d9df5885-5z5xb" Sep 12 17:30:55.445453 kubelet[2641]: E0912 17:30:55.445436 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64d9df5885-5z5xb" Sep 12 17:30:55.445557 kubelet[2641]: E0912 17:30:55.445479 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64d9df5885-5z5xb_calico-system(89f3f547-52ae-4646-86ac-31102c426a8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64d9df5885-5z5xb_calico-system(89f3f547-52ae-4646-86ac-31102c426a8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64d9df5885-5z5xb" 
podUID="89f3f547-52ae-4646-86ac-31102c426a8a" Sep 12 17:30:55.450291 containerd[1548]: time="2025-09-12T17:30:55.450229768Z" level=error msg="StopPodSandbox for \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\" failed" error="failed to destroy network for sandbox \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.450459 kubelet[2641]: E0912 17:30:55.450425 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:30:55.450526 kubelet[2641]: E0912 17:30:55.450487 2641 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f"} Sep 12 17:30:55.450569 kubelet[2641]: E0912 17:30:55.450541 2641 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"81976e56-dd02-491a-b629-59ec2cab5a05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:30:55.450569 kubelet[2641]: E0912 17:30:55.450562 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"81976e56-dd02-491a-b629-59ec2cab5a05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6777b9698d-87svx" podUID="81976e56-dd02-491a-b629-59ec2cab5a05" Sep 12 17:30:55.456780 containerd[1548]: time="2025-09-12T17:30:55.456720248Z" level=error msg="Failed to destroy network for sandbox \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.457119 containerd[1548]: time="2025-09-12T17:30:55.457093119Z" level=error msg="encountered an error cleaning up failed sandbox \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.457174 containerd[1548]: time="2025-09-12T17:30:55.457141278Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-75f496c6fb-6462h,Uid:e9d7de93-a50e-470c-992a-6e1a6cde9578,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.457419 kubelet[2641]: E0912 17:30:55.457387 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.457506 kubelet[2641]: E0912 17:30:55.457439 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75f496c6fb-6462h" Sep 12 17:30:55.457506 kubelet[2641]: E0912 17:30:55.457458 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75f496c6fb-6462h" Sep 12 17:30:55.457585 kubelet[2641]: E0912 17:30:55.457504 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75f496c6fb-6462h_calico-apiserver(e9d7de93-a50e-470c-992a-6e1a6cde9578)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75f496c6fb-6462h_calico-apiserver(e9d7de93-a50e-470c-992a-6e1a6cde9578)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75f496c6fb-6462h" podUID="e9d7de93-a50e-470c-992a-6e1a6cde9578" Sep 12 17:30:55.461711 containerd[1548]: time="2025-09-12T17:30:55.461512490Z" level=error msg="Failed to destroy network for sandbox \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.462110 containerd[1548]: time="2025-09-12T17:30:55.462002158Z" level=error msg="encountered an error cleaning up failed sandbox \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 12 17:30:55.462110 containerd[1548]: time="2025-09-12T17:30:55.462060957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t2lbd,Uid:09e6a9f7-4303-4b5f-ad99-a3e9b65f6620,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.463830 kubelet[2641]: E0912 17:30:55.462332 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:55.463830 kubelet[2641]: E0912 17:30:55.462389 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t2lbd" Sep 12 17:30:55.463830 kubelet[2641]: E0912 17:30:55.462406 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t2lbd" Sep 12 17:30:55.463973 kubelet[2641]: E0912 17:30:55.462447 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t2lbd_calico-system(09e6a9f7-4303-4b5f-ad99-a3e9b65f6620)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t2lbd_calico-system(09e6a9f7-4303-4b5f-ad99-a3e9b65f6620)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t2lbd" podUID="09e6a9f7-4303-4b5f-ad99-a3e9b65f6620" Sep 12 17:30:56.240335 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515-shm.mount: Deactivated successfully. Sep 12 17:30:56.240487 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26-shm.mount: Deactivated successfully. Sep 12 17:30:56.240567 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891-shm.mount: Deactivated successfully. Sep 12 17:30:56.240649 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c-shm.mount: Deactivated successfully. 
Sep 12 17:30:56.240733 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52-shm.mount: Deactivated successfully. Sep 12 17:30:56.240842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa-shm.mount: Deactivated successfully. Sep 12 17:30:56.240924 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d-shm.mount: Deactivated successfully. Sep 12 17:30:56.395948 kubelet[2641]: I0912 17:30:56.395316 2641 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:30:56.396292 containerd[1548]: time="2025-09-12T17:30:56.395873369Z" level=info msg="StopPodSandbox for \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\"" Sep 12 17:30:56.396292 containerd[1548]: time="2025-09-12T17:30:56.396058045Z" level=info msg="Ensure that sandbox faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515 in task-service has been cleanup successfully" Sep 12 17:30:56.399253 kubelet[2641]: I0912 17:30:56.398767 2641 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:30:56.399372 containerd[1548]: time="2025-09-12T17:30:56.399300408Z" level=info msg="StopPodSandbox for \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\"" Sep 12 17:30:56.399640 containerd[1548]: time="2025-09-12T17:30:56.399459764Z" level=info msg="Ensure that sandbox a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d in task-service has been cleanup successfully" Sep 12 17:30:56.400574 kubelet[2641]: I0912 17:30:56.400549 2641 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:30:56.401402 containerd[1548]: time="2025-09-12T17:30:56.401298721Z" level=info msg="StopPodSandbox for \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\"" Sep 12 17:30:56.401574 containerd[1548]: time="2025-09-12T17:30:56.401486916Z" level=info msg="Ensure that sandbox 15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa in task-service has been cleanup successfully" Sep 12 17:30:56.415999 kubelet[2641]: I0912 17:30:56.414350 2641 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:30:56.417656 containerd[1548]: time="2025-09-12T17:30:56.417335581Z" level=info msg="StopPodSandbox for \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\"" Sep 12 17:30:56.422748 containerd[1548]: time="2025-09-12T17:30:56.420026397Z" level=info msg="Ensure that sandbox 05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26 in task-service has been cleanup successfully" Sep 12 17:30:56.423652 kubelet[2641]: I0912 17:30:56.423550 2641 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:30:56.426658 containerd[1548]: time="2025-09-12T17:30:56.426334368Z" level=info msg="StopPodSandbox for \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\"" Sep 12 17:30:56.426658 containerd[1548]: time="2025-09-12T17:30:56.426538043Z" 
level=info msg="Ensure that sandbox 230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891 in task-service has been cleanup successfully" Sep 12 17:30:56.428192 kubelet[2641]: I0912 17:30:56.428111 2641 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:30:56.429721 containerd[1548]: time="2025-09-12T17:30:56.429669649Z" level=info msg="StopPodSandbox for \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\"" Sep 12 17:30:56.429901 containerd[1548]: time="2025-09-12T17:30:56.429875964Z" level=info msg="Ensure that sandbox 579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c in task-service has been cleanup successfully" Sep 12 17:30:56.437697 kubelet[2641]: I0912 17:30:56.437550 2641 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:30:56.447123 containerd[1548]: time="2025-09-12T17:30:56.447076797Z" level=info msg="StopPodSandbox for \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\"" Sep 12 17:30:56.448605 containerd[1548]: time="2025-09-12T17:30:56.447562865Z" level=info msg="Ensure that sandbox c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52 in task-service has been cleanup successfully" Sep 12 17:30:56.484741 containerd[1548]: time="2025-09-12T17:30:56.484672667Z" level=error msg="StopPodSandbox for \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\" failed" error="failed to destroy network for sandbox \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:56.484958 kubelet[2641]: E0912 17:30:56.484909 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:30:56.485020 kubelet[2641]: E0912 17:30:56.484972 2641 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26"} Sep 12 17:30:56.485020 kubelet[2641]: E0912 17:30:56.485011 2641 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"824c5a12-bcb9-44ed-a3d8-24c299fba85d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:30:56.485099 kubelet[2641]: E0912 17:30:56.485033 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"824c5a12-bcb9-44ed-a3d8-24c299fba85d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75f496c6fb-7szgf" podUID="824c5a12-bcb9-44ed-a3d8-24c299fba85d" Sep 12 17:30:56.487266 containerd[1548]: time="2025-09-12T17:30:56.487227326Z" level=error msg="StopPodSandbox for \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\" failed" error="failed to destroy network for sandbox \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:56.489108 kubelet[2641]: E0912 17:30:56.489072 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:30:56.489168 kubelet[2641]: E0912 17:30:56.489121 2641 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d"} Sep 12 17:30:56.489206 kubelet[2641]: E0912 17:30:56.489161 2641 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d33a2b08-c505-4cce-a314-e4b791e0c009\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:30:56.489206 kubelet[2641]: E0912 17:30:56.489186 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d33a2b08-c505-4cce-a314-e4b791e0c009\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-gz6qp" podUID="d33a2b08-c505-4cce-a314-e4b791e0c009" Sep 12 17:30:56.491750 containerd[1548]: time="2025-09-12T17:30:56.491652222Z" level=error msg="StopPodSandbox for \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\" failed" error="failed to destroy network for sandbox \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:56.493175 kubelet[2641]: E0912 17:30:56.493129 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:30:56.493250 kubelet[2641]: E0912 17:30:56.493187 2641 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515"} Sep 12 17:30:56.493250 kubelet[2641]: E0912 17:30:56.493217 2641 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09e6a9f7-4303-4b5f-ad99-a3e9b65f6620\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:30:56.493250 kubelet[2641]: E0912 17:30:56.493237 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09e6a9f7-4303-4b5f-ad99-a3e9b65f6620\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t2lbd" podUID="09e6a9f7-4303-4b5f-ad99-a3e9b65f6620" Sep 12 17:30:56.506741 containerd[1548]: time="2025-09-12T17:30:56.506695105Z" level=error msg="StopPodSandbox for \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\" failed" error="failed to destroy network for sandbox \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:56.507010 kubelet[2641]: E0912 17:30:56.506960 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:30:56.507059 kubelet[2641]: E0912 17:30:56.507013 2641 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c"} Sep 12 17:30:56.507115 kubelet[2641]: E0912 17:30:56.507075 2641 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7e50a73-6884-4846-84af-b99c62b21ac0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Sep 12 17:30:56.507115 kubelet[2641]: E0912 17:30:56.507099 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7e50a73-6884-4846-84af-b99c62b21ac0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tmx7c" podUID="c7e50a73-6884-4846-84af-b99c62b21ac0" Sep 12 17:30:56.521103 containerd[1548]: time="2025-09-12T17:30:56.521057805Z" level=error msg="StopPodSandbox for \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\" failed" error="failed to destroy network for sandbox \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:56.521408 kubelet[2641]: E0912 17:30:56.521366 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:30:56.521474 kubelet[2641]: E0912 17:30:56.521421 2641 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52"} Sep 12 17:30:56.521474 kubelet[2641]: E0912 17:30:56.521454 2641 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9d7de93-a50e-470c-992a-6e1a6cde9578\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:30:56.521568 kubelet[2641]: E0912 17:30:56.521474 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9d7de93-a50e-470c-992a-6e1a6cde9578\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75f496c6fb-6462h" podUID="e9d7de93-a50e-470c-992a-6e1a6cde9578" Sep 12 17:30:56.523389 containerd[1548]: time="2025-09-12T17:30:56.523259633Z" level=error msg="StopPodSandbox for \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\" failed" error="failed to destroy network for sandbox \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 12 17:30:56.523455 kubelet[2641]: E0912 17:30:56.523425 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:30:56.523509 kubelet[2641]: E0912 17:30:56.523453 2641 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa"} Sep 12 17:30:56.523509 kubelet[2641]: E0912 17:30:56.523476 2641 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1786da27-c74b-428e-9360-4f44ff994f41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:30:56.523509 kubelet[2641]: E0912 17:30:56.523494 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1786da27-c74b-428e-9360-4f44ff994f41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-58qjr" podUID="1786da27-c74b-428e-9360-4f44ff994f41" Sep 12 17:30:56.524093 containerd[1548]: time="2025-09-12T17:30:56.523986216Z" level=error msg="StopPodSandbox for \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\" failed" error="failed to destroy network for sandbox \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:56.524184 kubelet[2641]: E0912 17:30:56.524146 2641 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:30:56.524231 kubelet[2641]: E0912 17:30:56.524206 2641 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891"} Sep 12 17:30:56.524256 kubelet[2641]: E0912 17:30:56.524232 2641 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89f3f547-52ae-4646-86ac-31102c426a8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:30:56.524295 kubelet[2641]: E0912 17:30:56.524254 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89f3f547-52ae-4646-86ac-31102c426a8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64d9df5885-5z5xb" podUID="89f3f547-52ae-4646-86ac-31102c426a8a" Sep 12 17:30:58.610472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1845802240.mount: Deactivated successfully. Sep 12 17:30:58.777610 containerd[1548]: time="2025-09-12T17:30:58.777372400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:58.778215 containerd[1548]: time="2025-09-12T17:30:58.778178463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 17:30:58.780975 containerd[1548]: time="2025-09-12T17:30:58.780912283Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:58.793635 containerd[1548]: time="2025-09-12T17:30:58.793505126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:58.794099 containerd[1548]: time="2025-09-12T17:30:58.794055914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.40580882s" Sep 12 17:30:58.794099 containerd[1548]: time="2025-09-12T17:30:58.794094433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 17:30:58.824876 containerd[1548]: time="2025-09-12T17:30:58.824791159Z" level=info msg="CreateContainer within sandbox \"f3cdf50590ef0e54355179df8c72eb07d2d10c6273434312cac952a89d8faadc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:30:58.955589 containerd[1548]: time="2025-09-12T17:30:58.955450650Z" level=info msg="CreateContainer within sandbox \"f3cdf50590ef0e54355179df8c72eb07d2d10c6273434312cac952a89d8faadc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a623c1eb190c60c5dd3c913e1176b70a79f309682643553766afd48cb66c0d54\"" Sep 12 17:30:58.957398 containerd[1548]: time="2025-09-12T17:30:58.957343968Z" level=info msg="StartContainer for \"a623c1eb190c60c5dd3c913e1176b70a79f309682643553766afd48cb66c0d54\"" Sep 12 17:30:59.079230 containerd[1548]: time="2025-09-12T17:30:59.079182114Z" 
level=info msg="StartContainer for \"a623c1eb190c60c5dd3c913e1176b70a79f309682643553766afd48cb66c0d54\" returns successfully" Sep 12 17:30:59.137590 kubelet[2641]: I0912 17:30:59.137132 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:30:59.138212 kubelet[2641]: E0912 17:30:59.138135 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:59.198096 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:30:59.198213 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 17:30:59.374645 containerd[1548]: time="2025-09-12T17:30:59.374596856Z" level=info msg="StopPodSandbox for \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\"" Sep 12 17:30:59.453036 kubelet[2641]: E0912 17:30:59.447355 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:30:59.493846 kubelet[2641]: I0912 17:30:59.493102 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vdk56" podStartSLOduration=2.093095829 podStartE2EDuration="11.493083787s" podCreationTimestamp="2025-09-12 17:30:48 +0000 UTC" firstStartedPulling="2025-09-12 17:30:49.396353186 +0000 UTC m=+21.228974657" lastFinishedPulling="2025-09-12 17:30:58.796341144 +0000 UTC m=+30.628962615" observedRunningTime="2025-09-12 17:30:59.472250228 +0000 UTC m=+31.304871699" watchObservedRunningTime="2025-09-12 17:30:59.493083787 +0000 UTC m=+31.325705218" Sep 12 17:30:59.616101 containerd[1548]: 2025-09-12 17:30:59.494 [INFO][3902] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:30:59.616101 containerd[1548]: 2025-09-12 17:30:59.495 [INFO][3902] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" iface="eth0" netns="/var/run/netns/cni-4ac73dcf-9f7a-b443-18a8-611da129ec86" Sep 12 17:30:59.616101 containerd[1548]: 2025-09-12 17:30:59.497 [INFO][3902] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" iface="eth0" netns="/var/run/netns/cni-4ac73dcf-9f7a-b443-18a8-611da129ec86" Sep 12 17:30:59.616101 containerd[1548]: 2025-09-12 17:30:59.498 [INFO][3902] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" iface="eth0" netns="/var/run/netns/cni-4ac73dcf-9f7a-b443-18a8-611da129ec86" Sep 12 17:30:59.616101 containerd[1548]: 2025-09-12 17:30:59.498 [INFO][3902] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:30:59.616101 containerd[1548]: 2025-09-12 17:30:59.498 [INFO][3902] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:30:59.616101 containerd[1548]: 2025-09-12 17:30:59.592 [INFO][3912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" HandleID="k8s-pod-network.41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Workload="localhost-k8s-whisker--6777b9698d--87svx-eth0" Sep 12 17:30:59.616101 containerd[1548]: 2025-09-12 17:30:59.592 [INFO][3912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:30:59.616101 containerd[1548]: 2025-09-12 17:30:59.592 [INFO][3912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:30:59.616101 containerd[1548]: 2025-09-12 17:30:59.607 [WARNING][3912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" HandleID="k8s-pod-network.41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Workload="localhost-k8s-whisker--6777b9698d--87svx-eth0" Sep 12 17:30:59.616101 containerd[1548]: 2025-09-12 17:30:59.607 [INFO][3912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" HandleID="k8s-pod-network.41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Workload="localhost-k8s-whisker--6777b9698d--87svx-eth0" Sep 12 17:30:59.616101 containerd[1548]: 2025-09-12 17:30:59.610 [INFO][3912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:30:59.616101 containerd[1548]: 2025-09-12 17:30:59.613 [INFO][3902] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:30:59.617656 containerd[1548]: time="2025-09-12T17:30:59.616248578Z" level=info msg="TearDown network for sandbox \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\" successfully" Sep 12 17:30:59.617656 containerd[1548]: time="2025-09-12T17:30:59.616278977Z" level=info msg="StopPodSandbox for \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\" returns successfully" Sep 12 17:30:59.619579 systemd[1]: run-netns-cni\x2d4ac73dcf\x2d9f7a\x2db443\x2d18a8\x2d611da129ec86.mount: Deactivated successfully. 
Sep 12 17:30:59.690466 kubelet[2641]: I0912 17:30:59.690336 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81976e56-dd02-491a-b629-59ec2cab5a05-whisker-backend-key-pair\") pod \"81976e56-dd02-491a-b629-59ec2cab5a05\" (UID: \"81976e56-dd02-491a-b629-59ec2cab5a05\") " Sep 12 17:30:59.690466 kubelet[2641]: I0912 17:30:59.690395 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81976e56-dd02-491a-b629-59ec2cab5a05-whisker-ca-bundle\") pod \"81976e56-dd02-491a-b629-59ec2cab5a05\" (UID: \"81976e56-dd02-491a-b629-59ec2cab5a05\") " Sep 12 17:30:59.690466 kubelet[2641]: I0912 17:30:59.690422 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvxs4\" (UniqueName: \"kubernetes.io/projected/81976e56-dd02-491a-b629-59ec2cab5a05-kube-api-access-fvxs4\") pod \"81976e56-dd02-491a-b629-59ec2cab5a05\" (UID: \"81976e56-dd02-491a-b629-59ec2cab5a05\") " Sep 12 17:30:59.696789 systemd[1]: var-lib-kubelet-pods-81976e56\x2ddd02\x2d491a\x2db629\x2d59ec2cab5a05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfvxs4.mount: Deactivated successfully. Sep 12 17:30:59.697010 kubelet[2641]: I0912 17:30:59.696965 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81976e56-dd02-491a-b629-59ec2cab5a05-kube-api-access-fvxs4" (OuterVolumeSpecName: "kube-api-access-fvxs4") pod "81976e56-dd02-491a-b629-59ec2cab5a05" (UID: "81976e56-dd02-491a-b629-59ec2cab5a05"). InnerVolumeSpecName "kube-api-access-fvxs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:30:59.698424 kubelet[2641]: I0912 17:30:59.698392 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81976e56-dd02-491a-b629-59ec2cab5a05-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "81976e56-dd02-491a-b629-59ec2cab5a05" (UID: "81976e56-dd02-491a-b629-59ec2cab5a05"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:30:59.706951 kubelet[2641]: I0912 17:30:59.706908 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81976e56-dd02-491a-b629-59ec2cab5a05-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "81976e56-dd02-491a-b629-59ec2cab5a05" (UID: "81976e56-dd02-491a-b629-59ec2cab5a05"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:30:59.708506 systemd[1]: var-lib-kubelet-pods-81976e56\x2ddd02\x2d491a\x2db629\x2d59ec2cab5a05-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
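[Editor's note on the unit names in the mount lines above: names such as var-lib-kubelet-pods-81976e56\x2ddd02\x2d491a\x2db629\x2d59ec2cab5a05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfvxs4.mount are systemd's escaped form of the volume's mount path: the leading "/" is dropped, each remaining "/" becomes "-", and bytes that would be ambiguous in a unit name, such as a literal "-" (\x2d) or "~" (\x7e), are hex-escaped. The sketch below is a simplified version of that escaping, enough to reproduce the names seen in this log; real systemd-escape also handles corner cases like leading dots and the root path.]

    // unit_name.go - simplified sketch of systemd's path-to-mount-unit-name
    // escaping, matching the ".mount" lines above. Not a full systemd-escape
    // reimplementation; it only covers what appears in this log.
    package main

    import "fmt"

    // escapePath drops the leading "/", maps remaining "/" separators to "-",
    // keeps [A-Za-z0-9:_.] as-is, and hex-escapes everything else as \xNN,
    // so a literal "-" becomes \x2d and "~" becomes \x7e.
    func escapePath(path string) string {
            out := make([]byte, 0, len(path))
            for i := 0; i < len(path); i++ {
                    b := path[i]
                    switch {
                    case i == 0 && b == '/':
                            // leading slash is dropped
                    case b == '/':
                            out = append(out, '-')
                    case b >= 'a' && b <= 'z' || b >= 'A' && b <= 'Z' ||
                            b >= '0' && b <= '9' || b == ':' || b == '_' || b == '.':
                            out = append(out, b)
                    default:
                            out = append(out, fmt.Sprintf(`\x%02x`, b)...)
                    }
            }
            return string(out)
    }

    func main() {
            p := "/var/lib/kubelet/pods/81976e56-dd02-491a-b629-59ec2cab5a05/volumes/kubernetes.io~projected/kube-api-access-fvxs4"
            fmt.Println(escapePath(p) + ".mount")
            // Prints the unit systemd deactivated above:
            // var-lib-kubelet-pods-81976e56\x2ddd02\x2d491a\x2db629\x2d59ec2cab5a05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfvxs4.mount
    }

[The same rule explains the netns cleanup units earlier, e.g. run-netns-cni\x2d4ac73dcf\x2d9f7a\x2db443\x2d18a8\x2d611da129ec86.mount: the CNI netns names contain literal dashes, so each one is \x2d-escaped.]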
Sep 12 17:30:59.790949 kubelet[2641]: I0912 17:30:59.790904 2641 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81976e56-dd02-491a-b629-59ec2cab5a05-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 17:30:59.790949 kubelet[2641]: I0912 17:30:59.790945 2641 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81976e56-dd02-491a-b629-59ec2cab5a05-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 17:30:59.790949 kubelet[2641]: I0912 17:30:59.790955 2641 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvxs4\" (UniqueName: \"kubernetes.io/projected/81976e56-dd02-491a-b629-59ec2cab5a05-kube-api-access-fvxs4\") on node \"localhost\" DevicePath \"\"" Sep 12 17:31:00.600039 kubelet[2641]: I0912 17:31:00.599928 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c819bfb-c0ad-4171-bdf7-f8072d6ff6e3-whisker-ca-bundle\") pod \"whisker-76f78dfdf5-hv2p8\" (UID: \"5c819bfb-c0ad-4171-bdf7-f8072d6ff6e3\") " pod="calico-system/whisker-76f78dfdf5-hv2p8" Sep 12 17:31:00.600039 kubelet[2641]: I0912 17:31:00.599977 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5c819bfb-c0ad-4171-bdf7-f8072d6ff6e3-whisker-backend-key-pair\") pod \"whisker-76f78dfdf5-hv2p8\" (UID: \"5c819bfb-c0ad-4171-bdf7-f8072d6ff6e3\") " pod="calico-system/whisker-76f78dfdf5-hv2p8" Sep 12 17:31:00.600039 kubelet[2641]: I0912 17:31:00.599995 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs4qc\" (UniqueName: \"kubernetes.io/projected/5c819bfb-c0ad-4171-bdf7-f8072d6ff6e3-kube-api-access-qs4qc\") pod \"whisker-76f78dfdf5-hv2p8\" (UID: \"5c819bfb-c0ad-4171-bdf7-f8072d6ff6e3\") " pod="calico-system/whisker-76f78dfdf5-hv2p8" Sep 12 17:31:00.850965 containerd[1548]: time="2025-09-12T17:31:00.850554814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76f78dfdf5-hv2p8,Uid:5c819bfb-c0ad-4171-bdf7-f8072d6ff6e3,Namespace:calico-system,Attempt:0,}" Sep 12 17:31:00.928915 kernel: bpftool[4127]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:31:01.029693 systemd-networkd[1236]: caliece7e0bcfbf: Link UP Sep 12 17:31:01.030786 systemd-networkd[1236]: caliece7e0bcfbf: Gained carrier Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:00.931 [INFO][4085] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--76f78dfdf5--hv2p8-eth0 whisker-76f78dfdf5- calico-system 5c819bfb-c0ad-4171-bdf7-f8072d6ff6e3 887 0 2025-09-12 17:31:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76f78dfdf5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-76f78dfdf5-hv2p8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliece7e0bcfbf [] [] }} ContainerID="d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" Namespace="calico-system" Pod="whisker-76f78dfdf5-hv2p8" WorkloadEndpoint="localhost-k8s-whisker--76f78dfdf5--hv2p8-" Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:00.931 [INFO][4085] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" Namespace="calico-system" Pod="whisker-76f78dfdf5-hv2p8" WorkloadEndpoint="localhost-k8s-whisker--76f78dfdf5--hv2p8-eth0" Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:00.959 [INFO][4130] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" HandleID="k8s-pod-network.d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" Workload="localhost-k8s-whisker--76f78dfdf5--hv2p8-eth0" Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:00.960 [INFO][4130] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" HandleID="k8s-pod-network.d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" Workload="localhost-k8s-whisker--76f78dfdf5--hv2p8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-76f78dfdf5-hv2p8", "timestamp":"2025-09-12 17:31:00.959349349 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:00.960 [INFO][4130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:00.960 [INFO][4130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:00.960 [INFO][4130] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:00.971 [INFO][4130] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" host="localhost" Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:00.987 [INFO][4130] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:00.994 [INFO][4130] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:00.997 [INFO][4130] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:00.999 [INFO][4130] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:00.999 [INFO][4130] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" host="localhost" Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:01.002 [INFO][4130] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:01.008 [INFO][4130] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" host="localhost" Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 
17:31:01.020 [INFO][4130] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" host="localhost" Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:01.020 [INFO][4130] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" host="localhost" Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:01.020 [INFO][4130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:01.050295 containerd[1548]: 2025-09-12 17:31:01.020 [INFO][4130] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" HandleID="k8s-pod-network.d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" Workload="localhost-k8s-whisker--76f78dfdf5--hv2p8-eth0" Sep 12 17:31:01.051260 containerd[1548]: 2025-09-12 17:31:01.022 [INFO][4085] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" Namespace="calico-system" Pod="whisker-76f78dfdf5-hv2p8" WorkloadEndpoint="localhost-k8s-whisker--76f78dfdf5--hv2p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76f78dfdf5--hv2p8-eth0", GenerateName:"whisker-76f78dfdf5-", Namespace:"calico-system", SelfLink:"", UID:"5c819bfb-c0ad-4171-bdf7-f8072d6ff6e3", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76f78dfdf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-76f78dfdf5-hv2p8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliece7e0bcfbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:01.051260 containerd[1548]: 2025-09-12 17:31:01.022 [INFO][4085] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" Namespace="calico-system" Pod="whisker-76f78dfdf5-hv2p8" WorkloadEndpoint="localhost-k8s-whisker--76f78dfdf5--hv2p8-eth0" Sep 12 17:31:01.051260 containerd[1548]: 2025-09-12 17:31:01.022 [INFO][4085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliece7e0bcfbf ContainerID="d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" Namespace="calico-system" Pod="whisker-76f78dfdf5-hv2p8" WorkloadEndpoint="localhost-k8s-whisker--76f78dfdf5--hv2p8-eth0" Sep 12 17:31:01.051260 containerd[1548]: 2025-09-12 17:31:01.031 [INFO][4085] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" Namespace="calico-system" Pod="whisker-76f78dfdf5-hv2p8" WorkloadEndpoint="localhost-k8s-whisker--76f78dfdf5--hv2p8-eth0" Sep 12 17:31:01.051260 containerd[1548]: 2025-09-12 17:31:01.032 [INFO][4085] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" Namespace="calico-system" Pod="whisker-76f78dfdf5-hv2p8" WorkloadEndpoint="localhost-k8s-whisker--76f78dfdf5--hv2p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76f78dfdf5--hv2p8-eth0", GenerateName:"whisker-76f78dfdf5-", Namespace:"calico-system", SelfLink:"", UID:"5c819bfb-c0ad-4171-bdf7-f8072d6ff6e3", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76f78dfdf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d", Pod:"whisker-76f78dfdf5-hv2p8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliece7e0bcfbf", MAC:"8e:85:7c:e7:b2:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:01.051260 containerd[1548]: 2025-09-12 17:31:01.044 [INFO][4085] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d" Namespace="calico-system" Pod="whisker-76f78dfdf5-hv2p8" WorkloadEndpoint="localhost-k8s-whisker--76f78dfdf5--hv2p8-eth0" Sep 12 17:31:01.081442 containerd[1548]: time="2025-09-12T17:31:01.081264031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:01.081442 containerd[1548]: time="2025-09-12T17:31:01.081401788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:01.081442 containerd[1548]: time="2025-09-12T17:31:01.081433708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:01.082473 containerd[1548]: time="2025-09-12T17:31:01.081605344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:01.113330 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:01.122660 systemd-networkd[1236]: vxlan.calico: Link UP Sep 12 17:31:01.122667 systemd-networkd[1236]: vxlan.calico: Gained carrier Sep 12 17:31:01.156530 containerd[1548]: time="2025-09-12T17:31:01.156489744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76f78dfdf5-hv2p8,Uid:5c819bfb-c0ad-4171-bdf7-f8072d6ff6e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d\"" Sep 12 17:31:01.161282 containerd[1548]: time="2025-09-12T17:31:01.161244090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:31:02.198628 containerd[1548]: time="2025-09-12T17:31:02.198569713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:02.200064 containerd[1548]: time="2025-09-12T17:31:02.199881968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 17:31:02.201824 containerd[1548]: time="2025-09-12T17:31:02.201761852Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:02.204641 containerd[1548]: time="2025-09-12T17:31:02.204597998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:02.205565 containerd[1548]: time="2025-09-12T17:31:02.205531820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.04424277s" Sep 12 17:31:02.205677 containerd[1548]: time="2025-09-12T17:31:02.205571219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 17:31:02.208233 containerd[1548]: time="2025-09-12T17:31:02.208196929Z" level=info msg="CreateContainer within sandbox \"d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:31:02.225205 containerd[1548]: time="2025-09-12T17:31:02.225140925Z" level=info msg="CreateContainer within sandbox \"d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"355c5ca9d6d46eee33d66772065fdf2e27553eb6c72bfc837911a4fe9b0483fc\"" Sep 12 17:31:02.225667 containerd[1548]: time="2025-09-12T17:31:02.225631836Z" level=info msg="StartContainer for \"355c5ca9d6d46eee33d66772065fdf2e27553eb6c72bfc837911a4fe9b0483fc\"" Sep 12 17:31:02.297349 containerd[1548]: time="2025-09-12T17:31:02.297242266Z" level=info msg="StartContainer for \"355c5ca9d6d46eee33d66772065fdf2e27553eb6c72bfc837911a4fe9b0483fc\" returns successfully" Sep 12 17:31:02.298019 kubelet[2641]: I0912 17:31:02.297971 2641 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="81976e56-dd02-491a-b629-59ec2cab5a05" path="/var/lib/kubelet/pods/81976e56-dd02-491a-b629-59ec2cab5a05/volumes" Sep 12 17:31:02.299570 containerd[1548]: time="2025-09-12T17:31:02.299357186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:31:02.751105 systemd-networkd[1236]: caliece7e0bcfbf: Gained IPv6LL Sep 12 17:31:02.878938 systemd-networkd[1236]: vxlan.calico: Gained IPv6LL Sep 12 17:31:03.549638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4170128313.mount: Deactivated successfully. Sep 12 17:31:03.587813 containerd[1548]: time="2025-09-12T17:31:03.587370907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:03.587813 containerd[1548]: time="2025-09-12T17:31:03.587413786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 17:31:03.604040 containerd[1548]: time="2025-09-12T17:31:03.603948520Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:03.608344 containerd[1548]: time="2025-09-12T17:31:03.608258160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:03.610007 containerd[1548]: time="2025-09-12T17:31:03.609945329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.310548344s" Sep 12 17:31:03.610007 containerd[1548]: time="2025-09-12T17:31:03.609990848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 17:31:03.653154 containerd[1548]: time="2025-09-12T17:31:03.653081690Z" level=info msg="CreateContainer within sandbox \"d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:31:03.674265 containerd[1548]: time="2025-09-12T17:31:03.674218578Z" level=info msg="CreateContainer within sandbox \"d29f7380f44232a108709a827a1b2ac8d3c1aeab769b9b291a589d00daf2821d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b6cd7935dde1bdc7059e24b78fbb46b3611c56529bb481272161e745de7c08b4\"" Sep 12 17:31:03.675375 containerd[1548]: time="2025-09-12T17:31:03.674887966Z" level=info msg="StartContainer for \"b6cd7935dde1bdc7059e24b78fbb46b3611c56529bb481272161e745de7c08b4\"" Sep 12 17:31:03.797043 containerd[1548]: time="2025-09-12T17:31:03.796987024Z" level=info msg="StartContainer for \"b6cd7935dde1bdc7059e24b78fbb46b3611c56529bb481272161e745de7c08b4\" returns successfully" Sep 12 17:31:04.473708 kubelet[2641]: I0912 17:31:04.473629 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-76f78dfdf5-hv2p8" podStartSLOduration=2.023740183 podStartE2EDuration="4.473611559s" 
podCreationTimestamp="2025-09-12 17:31:00 +0000 UTC" firstStartedPulling="2025-09-12 17:31:01.16074686 +0000 UTC m=+32.993368331" lastFinishedPulling="2025-09-12 17:31:03.610618236 +0000 UTC m=+35.443239707" observedRunningTime="2025-09-12 17:31:04.473241685 +0000 UTC m=+36.305863156" watchObservedRunningTime="2025-09-12 17:31:04.473611559 +0000 UTC m=+36.306233030" Sep 12 17:31:07.285871 containerd[1548]: time="2025-09-12T17:31:07.285791156Z" level=info msg="StopPodSandbox for \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\"" Sep 12 17:31:07.363060 containerd[1548]: 2025-09-12 17:31:07.329 [INFO][4371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:31:07.363060 containerd[1548]: 2025-09-12 17:31:07.329 [INFO][4371] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" iface="eth0" netns="/var/run/netns/cni-da851aa7-10ef-72ef-9890-1237ebcabde8" Sep 12 17:31:07.363060 containerd[1548]: 2025-09-12 17:31:07.329 [INFO][4371] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" iface="eth0" netns="/var/run/netns/cni-da851aa7-10ef-72ef-9890-1237ebcabde8" Sep 12 17:31:07.363060 containerd[1548]: 2025-09-12 17:31:07.329 [INFO][4371] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" iface="eth0" netns="/var/run/netns/cni-da851aa7-10ef-72ef-9890-1237ebcabde8" Sep 12 17:31:07.363060 containerd[1548]: 2025-09-12 17:31:07.329 [INFO][4371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:31:07.363060 containerd[1548]: 2025-09-12 17:31:07.329 [INFO][4371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:31:07.363060 containerd[1548]: 2025-09-12 17:31:07.348 [INFO][4380] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" HandleID="k8s-pod-network.579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Workload="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:07.363060 containerd[1548]: 2025-09-12 17:31:07.349 [INFO][4380] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:07.363060 containerd[1548]: 2025-09-12 17:31:07.349 [INFO][4380] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:07.363060 containerd[1548]: 2025-09-12 17:31:07.357 [WARNING][4380] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" HandleID="k8s-pod-network.579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Workload="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:07.363060 containerd[1548]: 2025-09-12 17:31:07.357 [INFO][4380] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" HandleID="k8s-pod-network.579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Workload="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:07.363060 containerd[1548]: 2025-09-12 17:31:07.359 [INFO][4380] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:07.363060 containerd[1548]: 2025-09-12 17:31:07.361 [INFO][4371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:31:07.363443 containerd[1548]: time="2025-09-12T17:31:07.363206641Z" level=info msg="TearDown network for sandbox \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\" successfully" Sep 12 17:31:07.363443 containerd[1548]: time="2025-09-12T17:31:07.363232561Z" level=info msg="StopPodSandbox for \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\" returns successfully" Sep 12 17:31:07.365825 kubelet[2641]: E0912 17:31:07.365054 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:07.365433 systemd[1]: run-netns-cni\x2dda851aa7\x2d10ef\x2d72ef\x2d9890\x2d1237ebcabde8.mount: Deactivated successfully. Sep 12 17:31:07.366392 containerd[1548]: time="2025-09-12T17:31:07.365933596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tmx7c,Uid:c7e50a73-6884-4846-84af-b99c62b21ac0,Namespace:kube-system,Attempt:1,}" Sep 12 17:31:07.494159 systemd-networkd[1236]: calie3fc8ae14d0: Link UP Sep 12 17:31:07.494939 systemd-networkd[1236]: calie3fc8ae14d0: Gained carrier Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.422 [INFO][4388] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0 coredns-7c65d6cfc9- kube-system c7e50a73-6884-4846-84af-b99c62b21ac0 925 0 2025-09-12 17:30:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-tmx7c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie3fc8ae14d0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tmx7c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tmx7c-" Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.422 [INFO][4388] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tmx7c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.448 [INFO][4403] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" HandleID="k8s-pod-network.c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" Workload="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.448 [INFO][4403] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" HandleID="k8s-pod-network.c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" Workload="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd9b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-tmx7c", "timestamp":"2025-09-12 17:31:07.448446478 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.448 [INFO][4403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.448 [INFO][4403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.448 [INFO][4403] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.457 [INFO][4403] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" host="localhost" Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.466 [INFO][4403] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.471 [INFO][4403] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.473 [INFO][4403] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.476 [INFO][4403] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.477 [INFO][4403] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" host="localhost" Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.478 [INFO][4403] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459 Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.482 [INFO][4403] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" host="localhost" Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.489 [INFO][4403] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" host="localhost" Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.489 [INFO][4403] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" host="localhost" Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.489 [INFO][4403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:07.513408 containerd[1548]: 2025-09-12 17:31:07.489 [INFO][4403] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" HandleID="k8s-pod-network.c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" Workload="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:07.514069 containerd[1548]: 2025-09-12 17:31:07.491 [INFO][4388] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tmx7c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c7e50a73-6884-4846-84af-b99c62b21ac0", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-tmx7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie3fc8ae14d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:07.514069 containerd[1548]: 2025-09-12 17:31:07.491 [INFO][4388] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tmx7c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:07.514069 containerd[1548]: 2025-09-12 17:31:07.491 [INFO][4388] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3fc8ae14d0 ContainerID="c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tmx7c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:07.514069 containerd[1548]: 2025-09-12 17:31:07.495 [INFO][4388] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tmx7c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:07.514069 containerd[1548]: 2025-09-12 17:31:07.495 [INFO][4388] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tmx7c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c7e50a73-6884-4846-84af-b99c62b21ac0", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459", Pod:"coredns-7c65d6cfc9-tmx7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie3fc8ae14d0", MAC:"62:29:f9:11:3c:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:07.514069 containerd[1548]: 2025-09-12 17:31:07.511 [INFO][4388] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tmx7c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:07.530611 containerd[1548]: time="2025-09-12T17:31:07.530519166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:07.530749 containerd[1548]: time="2025-09-12T17:31:07.530623884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:07.530749 containerd[1548]: time="2025-09-12T17:31:07.530651004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:07.530808 containerd[1548]: time="2025-09-12T17:31:07.530774842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:07.561250 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:07.584739 containerd[1548]: time="2025-09-12T17:31:07.584676234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tmx7c,Uid:c7e50a73-6884-4846-84af-b99c62b21ac0,Namespace:kube-system,Attempt:1,} returns sandbox id \"c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459\"" Sep 12 17:31:07.585587 kubelet[2641]: E0912 17:31:07.585564 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:07.587983 containerd[1548]: time="2025-09-12T17:31:07.587952100Z" level=info msg="CreateContainer within sandbox \"c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:31:07.611536 containerd[1548]: time="2025-09-12T17:31:07.611447873Z" level=info msg="CreateContainer within sandbox \"c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4d055d3edff4c8182222aa6c3d5ccb4c2132ded9c5945b8712b763e5aef77df5\"" Sep 12 17:31:07.612271 containerd[1548]: time="2025-09-12T17:31:07.612245140Z" level=info msg="StartContainer for \"4d055d3edff4c8182222aa6c3d5ccb4c2132ded9c5945b8712b763e5aef77df5\"" Sep 12 17:31:07.659765 containerd[1548]: time="2025-09-12T17:31:07.659713598Z" level=info msg="StartContainer for \"4d055d3edff4c8182222aa6c3d5ccb4c2132ded9c5945b8712b763e5aef77df5\" returns successfully" Sep 12 17:31:08.285954 containerd[1548]: time="2025-09-12T17:31:08.285906530Z" level=info msg="StopPodSandbox for \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\"" Sep 12 17:31:08.286337 containerd[1548]: time="2025-09-12T17:31:08.286113367Z" level=info msg="StopPodSandbox for \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\"" Sep 12 17:31:08.377240 containerd[1548]: 2025-09-12 17:31:08.338 [INFO][4522] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:31:08.377240 containerd[1548]: 2025-09-12 17:31:08.338 [INFO][4522] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" iface="eth0" netns="/var/run/netns/cni-fdde5213-8c67-82db-f393-d261bbe32c45" Sep 12 17:31:08.377240 containerd[1548]: 2025-09-12 17:31:08.338 [INFO][4522] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" iface="eth0" netns="/var/run/netns/cni-fdde5213-8c67-82db-f393-d261bbe32c45" Sep 12 17:31:08.377240 containerd[1548]: 2025-09-12 17:31:08.338 [INFO][4522] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" iface="eth0" netns="/var/run/netns/cni-fdde5213-8c67-82db-f393-d261bbe32c45" Sep 12 17:31:08.377240 containerd[1548]: 2025-09-12 17:31:08.338 [INFO][4522] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:31:08.377240 containerd[1548]: 2025-09-12 17:31:08.338 [INFO][4522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:31:08.377240 containerd[1548]: 2025-09-12 17:31:08.362 [INFO][4540] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" HandleID="k8s-pod-network.faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Workload="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:08.377240 containerd[1548]: 2025-09-12 17:31:08.362 [INFO][4540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:08.377240 containerd[1548]: 2025-09-12 17:31:08.362 [INFO][4540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:08.377240 containerd[1548]: 2025-09-12 17:31:08.371 [WARNING][4540] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" HandleID="k8s-pod-network.faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Workload="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:08.377240 containerd[1548]: 2025-09-12 17:31:08.371 [INFO][4540] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" HandleID="k8s-pod-network.faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Workload="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:08.377240 containerd[1548]: 2025-09-12 17:31:08.372 [INFO][4540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:08.377240 containerd[1548]: 2025-09-12 17:31:08.374 [INFO][4522] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:31:08.377682 containerd[1548]: time="2025-09-12T17:31:08.377422023Z" level=info msg="TearDown network for sandbox \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\" successfully" Sep 12 17:31:08.377682 containerd[1548]: time="2025-09-12T17:31:08.377572221Z" level=info msg="StopPodSandbox for \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\" returns successfully" Sep 12 17:31:08.379649 systemd[1]: run-netns-cni\x2dfdde5213\x2d8c67\x2d82db\x2df393\x2dd261bbe32c45.mount: Deactivated successfully. Sep 12 17:31:08.381051 containerd[1548]: time="2025-09-12T17:31:08.381016206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t2lbd,Uid:09e6a9f7-4303-4b5f-ad99-a3e9b65f6620,Namespace:calico-system,Attempt:1,}" Sep 12 17:31:08.397182 containerd[1548]: 2025-09-12 17:31:08.344 [INFO][4523] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:31:08.397182 containerd[1548]: 2025-09-12 17:31:08.345 [INFO][4523] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" iface="eth0" netns="/var/run/netns/cni-4e0143bd-8f14-cff2-f62e-5d17449b6d2e" Sep 12 17:31:08.397182 containerd[1548]: 2025-09-12 17:31:08.345 [INFO][4523] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" iface="eth0" netns="/var/run/netns/cni-4e0143bd-8f14-cff2-f62e-5d17449b6d2e" Sep 12 17:31:08.397182 containerd[1548]: 2025-09-12 17:31:08.347 [INFO][4523] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" iface="eth0" netns="/var/run/netns/cni-4e0143bd-8f14-cff2-f62e-5d17449b6d2e" Sep 12 17:31:08.397182 containerd[1548]: 2025-09-12 17:31:08.347 [INFO][4523] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:31:08.397182 containerd[1548]: 2025-09-12 17:31:08.347 [INFO][4523] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:31:08.397182 containerd[1548]: 2025-09-12 17:31:08.379 [INFO][4547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" HandleID="k8s-pod-network.05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Workload="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:08.397182 containerd[1548]: 2025-09-12 17:31:08.379 [INFO][4547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:08.397182 containerd[1548]: 2025-09-12 17:31:08.379 [INFO][4547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:08.397182 containerd[1548]: 2025-09-12 17:31:08.390 [WARNING][4547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" HandleID="k8s-pod-network.05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Workload="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:08.397182 containerd[1548]: 2025-09-12 17:31:08.390 [INFO][4547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" HandleID="k8s-pod-network.05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Workload="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:08.397182 containerd[1548]: 2025-09-12 17:31:08.392 [INFO][4547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:08.397182 containerd[1548]: 2025-09-12 17:31:08.394 [INFO][4523] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:31:08.397572 containerd[1548]: time="2025-09-12T17:31:08.397366264Z" level=info msg="TearDown network for sandbox \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\" successfully" Sep 12 17:31:08.397572 containerd[1548]: time="2025-09-12T17:31:08.397393423Z" level=info msg="StopPodSandbox for \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\" returns successfully" Sep 12 17:31:08.398996 containerd[1548]: time="2025-09-12T17:31:08.398954958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f496c6fb-7szgf,Uid:824c5a12-bcb9-44ed-a3d8-24c299fba85d,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:31:08.400890 systemd[1]: run-netns-cni\x2d4e0143bd\x2d8f14\x2dcff2\x2df62e\x2d5d17449b6d2e.mount: Deactivated successfully. Sep 12 17:31:08.475105 kubelet[2641]: E0912 17:31:08.474974 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:08.507740 kubelet[2641]: I0912 17:31:08.507649 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tmx7c" podStartSLOduration=34.507619736 podStartE2EDuration="34.507619736s" podCreationTimestamp="2025-09-12 17:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:31:08.491446435 +0000 UTC m=+40.324067906" watchObservedRunningTime="2025-09-12 17:31:08.507619736 +0000 UTC m=+40.340241207" Sep 12 17:31:08.538392 systemd-networkd[1236]: cali92a09bf834b: Link UP Sep 12 17:31:08.538764 systemd-networkd[1236]: cali92a09bf834b: Gained carrier Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.443 [INFO][4556] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--t2lbd-eth0 csi-node-driver- calico-system 09e6a9f7-4303-4b5f-ad99-a3e9b65f6620 937 0 2025-09-12 17:30:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-t2lbd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali92a09bf834b [] [] }} ContainerID="dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" Namespace="calico-system" Pod="csi-node-driver-t2lbd" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2lbd-" Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.443 [INFO][4556] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" Namespace="calico-system" Pod="csi-node-driver-t2lbd" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.478 [INFO][4586] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" HandleID="k8s-pod-network.dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" Workload="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 
17:31:08.478 [INFO][4586] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" HandleID="k8s-pod-network.dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" Workload="localhost-k8s-csi--node--driver--t2lbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005221a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-t2lbd", "timestamp":"2025-09-12 17:31:08.478307246 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.478 [INFO][4586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.478 [INFO][4586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.478 [INFO][4586] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.489 [INFO][4586] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" host="localhost" Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.497 [INFO][4586] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.503 [INFO][4586] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.510 [INFO][4586] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.515 [INFO][4586] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.515 [INFO][4586] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" host="localhost" Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.519 [INFO][4586] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.525 [INFO][4586] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" host="localhost" Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.531 [INFO][4586] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" host="localhost" Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.531 [INFO][4586] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" host="localhost" Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.531 [INFO][4586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
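
For readers tracing the ipam/ipam.go messages above: the sequence they record is acquire the host-wide lock, confirm the host's affinity for the 192.168.88.128/26 block, claim the next free address, write the block back, release the lock. Below is a minimal sketch of that claim loop. It is not Calico's implementation: the `ipamBlock` type and `claimNext` helper are hypothetical, the real code persists blocks in the datastore rather than in memory, and the pre-claimed addresses are simply the ones this log shows as already assigned.

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// ipamBlock is a hypothetical stand-in for a Calico IPAM block:
// a /26 CIDR affine to one host, plus a set of claimed addresses.
type ipamBlock struct {
	cidr    *net.IPNet
	claimed map[string]bool
}

var hostWideLock sync.Mutex // mirrors the "host-wide IPAM lock" in the log

// claimNext claims the first unassigned address in the block, mirroring
// the log's "Attempting to assign 1 addresses from block".
func claimNext(b *ipamBlock) (net.IP, error) {
	hostWideLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostWideLock.Unlock() // "Released host-wide IPAM lock."

	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = nextIP(ip) {
		if !b.claimed[ip.String()] {
			b.claimed[ip.String()] = true // "Writing block in order to claim IPs"
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

// nextIP returns ip + 1, carrying across octets.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	block := &ipamBlock{cidr: cidr, claimed: map[string]bool{
		// Addresses the log shows as taken earlier in the boot.
		"192.168.88.128": true, "192.168.88.129": true, "192.168.88.130": true,
	}}
	ip, _ := claimNext(block)
	fmt.Println(ip) // 192.168.88.131, matching the csi-node-driver pod above
}
```

The addresses that follow in this section (.132 for the apiserver pod, .133, .134) fall out of the same walk: each claim marks one more address in the shared /26 and the next pod gets the following one.
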
Sep 12 17:31:08.558226 containerd[1548]: 2025-09-12 17:31:08.531 [INFO][4586] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" HandleID="k8s-pod-network.dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" Workload="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:08.559037 containerd[1548]: 2025-09-12 17:31:08.534 [INFO][4556] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" Namespace="calico-system" Pod="csi-node-driver-t2lbd" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2lbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t2lbd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09e6a9f7-4303-4b5f-ad99-a3e9b65f6620", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-t2lbd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92a09bf834b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:08.559037 containerd[1548]: 2025-09-12 17:31:08.534 [INFO][4556] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" Namespace="calico-system" Pod="csi-node-driver-t2lbd" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:08.559037 containerd[1548]: 2025-09-12 17:31:08.534 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92a09bf834b ContainerID="dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" Namespace="calico-system" Pod="csi-node-driver-t2lbd" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:08.559037 containerd[1548]: 2025-09-12 17:31:08.538 [INFO][4556] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" Namespace="calico-system" Pod="csi-node-driver-t2lbd" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:08.559037 containerd[1548]: 2025-09-12 17:31:08.538 [INFO][4556] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" Namespace="calico-system" Pod="csi-node-driver-t2lbd" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--t2lbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t2lbd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09e6a9f7-4303-4b5f-ad99-a3e9b65f6620", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c", Pod:"csi-node-driver-t2lbd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92a09bf834b", MAC:"c2:70:c5:0d:93:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:08.559037 containerd[1548]: 2025-09-12 17:31:08.552 [INFO][4556] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c" Namespace="calico-system" Pod="csi-node-driver-t2lbd" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:08.574684 containerd[1548]: time="2025-09-12T17:31:08.574554463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:08.575011 containerd[1548]: time="2025-09-12T17:31:08.574933937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:08.575154 containerd[1548]: time="2025-09-12T17:31:08.575000656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:08.575335 containerd[1548]: time="2025-09-12T17:31:08.575308051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:08.611766 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:08.627236 containerd[1548]: time="2025-09-12T17:31:08.627160059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t2lbd,Uid:09e6a9f7-4303-4b5f-ad99-a3e9b65f6620,Namespace:calico-system,Attempt:1,} returns sandbox id \"dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c\"" Sep 12 17:31:08.629224 containerd[1548]: time="2025-09-12T17:31:08.629184547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:31:08.634790 systemd-networkd[1236]: calid06122c5ae6: Link UP Sep 12 17:31:08.635385 systemd-networkd[1236]: calid06122c5ae6: Gained carrier Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.463 [INFO][4569] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0 calico-apiserver-75f496c6fb- calico-apiserver 824c5a12-bcb9-44ed-a3d8-24c299fba85d 938 0 2025-09-12 17:30:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75f496c6fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-75f496c6fb-7szgf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid06122c5ae6 [] [] }} ContainerID="8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-7szgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-" Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.463 [INFO][4569] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-7szgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.504 [INFO][4593] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" HandleID="k8s-pod-network.8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" Workload="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.505 [INFO][4593] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" HandleID="k8s-pod-network.8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" Workload="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-75f496c6fb-7szgf", "timestamp":"2025-09-12 17:31:08.504977218 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.505 [INFO][4593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.534 [INFO][4593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.535 [INFO][4593] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.591 [INFO][4593] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" host="localhost" Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.600 [INFO][4593] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.605 [INFO][4593] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.608 [INFO][4593] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.611 [INFO][4593] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.611 [INFO][4593] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" host="localhost" Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.614 [INFO][4593] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2 Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.619 [INFO][4593] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" host="localhost" Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.626 [INFO][4593] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" host="localhost" Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.626 [INFO][4593] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" host="localhost" Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.626 [INFO][4593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
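
An aside for anyone post-processing a capture like this: every line carries a journald-style prefix (date, unit, PID) followed by the unit's own payload, which for containerd is either logrus fields (time=..., level=..., msg=...) or the Calico plugin's bracketed trace. A best-effort sketch of splitting off that prefix, assuming the exact syslog-style format shown here rather than general journald output:

```go
package main

import (
	"fmt"
	"regexp"
)

// prefixRe matches the prefix used throughout this log, e.g.
// "Sep 12 17:31:08.627236 containerd[1548]: <unit output>".
// Lines without a PID suffix (e.g. bare "kernel:") will not match.
var prefixRe = regexp.MustCompile(
	`^([A-Z][a-z]{2} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) (\S+?)\[(\d+)\]: (.*)$`)

func main() {
	line := `Sep 12 17:31:08.627236 containerd[1548]: time="2025-09-12T17:31:08.627160059Z" level=info msg="RunPodSandbox ..."`
	m := prefixRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("ts=%s unit=%s pid=%s\n", m[1], m[2], m[3]) // ts=Sep 12 17:31:08.627236 unit=containerd pid=1548
	fmt.Printf("payload=%s\n", m[4])                       // the logrus fields, still to be parsed
}
```
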
Sep 12 17:31:08.653074 containerd[1548]: 2025-09-12 17:31:08.626 [INFO][4593] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" HandleID="k8s-pod-network.8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" Workload="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:08.653713 containerd[1548]: 2025-09-12 17:31:08.631 [INFO][4569] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-7szgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0", GenerateName:"calico-apiserver-75f496c6fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"824c5a12-bcb9-44ed-a3d8-24c299fba85d", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f496c6fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-75f496c6fb-7szgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid06122c5ae6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:08.653713 containerd[1548]: 2025-09-12 17:31:08.631 [INFO][4569] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-7szgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:08.653713 containerd[1548]: 2025-09-12 17:31:08.631 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid06122c5ae6 ContainerID="8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-7szgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:08.653713 containerd[1548]: 2025-09-12 17:31:08.635 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-7szgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:08.653713 containerd[1548]: 2025-09-12 17:31:08.636 [INFO][4569] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-7szgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0", GenerateName:"calico-apiserver-75f496c6fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"824c5a12-bcb9-44ed-a3d8-24c299fba85d", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f496c6fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2", Pod:"calico-apiserver-75f496c6fb-7szgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid06122c5ae6", MAC:"0a:cf:bd:a0:0e:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:08.653713 containerd[1548]: 2025-09-12 17:31:08.650 [INFO][4569] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-7szgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:08.676745 containerd[1548]: time="2025-09-12T17:31:08.676143354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:08.676745 containerd[1548]: time="2025-09-12T17:31:08.676578547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:08.676745 containerd[1548]: time="2025-09-12T17:31:08.676591827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:08.676745 containerd[1548]: time="2025-09-12T17:31:08.676702985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:08.700768 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:08.731485 containerd[1548]: time="2025-09-12T17:31:08.731444628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f496c6fb-7szgf,Uid:824c5a12-bcb9-44ed-a3d8-24c299fba85d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2\"" Sep 12 17:31:09.285443 containerd[1548]: time="2025-09-12T17:31:09.285337704Z" level=info msg="StopPodSandbox for \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\"" Sep 12 17:31:09.378878 containerd[1548]: 2025-09-12 17:31:09.342 [INFO][4716] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:31:09.378878 containerd[1548]: 2025-09-12 17:31:09.343 [INFO][4716] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" iface="eth0" netns="/var/run/netns/cni-360208f8-e025-3edd-ce6a-12d294f49a48" Sep 12 17:31:09.378878 containerd[1548]: 2025-09-12 17:31:09.343 [INFO][4716] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" iface="eth0" netns="/var/run/netns/cni-360208f8-e025-3edd-ce6a-12d294f49a48" Sep 12 17:31:09.378878 containerd[1548]: 2025-09-12 17:31:09.343 [INFO][4716] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" iface="eth0" netns="/var/run/netns/cni-360208f8-e025-3edd-ce6a-12d294f49a48" Sep 12 17:31:09.378878 containerd[1548]: 2025-09-12 17:31:09.343 [INFO][4716] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:31:09.378878 containerd[1548]: 2025-09-12 17:31:09.343 [INFO][4716] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:31:09.378878 containerd[1548]: 2025-09-12 17:31:09.363 [INFO][4725] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" HandleID="k8s-pod-network.c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Workload="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:09.378878 containerd[1548]: 2025-09-12 17:31:09.363 [INFO][4725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:09.378878 containerd[1548]: 2025-09-12 17:31:09.363 [INFO][4725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:09.378878 containerd[1548]: 2025-09-12 17:31:09.372 [WARNING][4725] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" HandleID="k8s-pod-network.c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Workload="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:09.378878 containerd[1548]: 2025-09-12 17:31:09.372 [INFO][4725] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" HandleID="k8s-pod-network.c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Workload="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:09.378878 containerd[1548]: 2025-09-12 17:31:09.374 [INFO][4725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:09.378878 containerd[1548]: 2025-09-12 17:31:09.376 [INFO][4716] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:31:09.379504 containerd[1548]: time="2025-09-12T17:31:09.379072479Z" level=info msg="TearDown network for sandbox \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\" successfully" Sep 12 17:31:09.379504 containerd[1548]: time="2025-09-12T17:31:09.379107439Z" level=info msg="StopPodSandbox for \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\" returns successfully" Sep 12 17:31:09.380555 containerd[1548]: time="2025-09-12T17:31:09.380153902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f496c6fb-6462h,Uid:e9d7de93-a50e-470c-992a-6e1a6cde9578,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:31:09.381443 systemd[1]: run-netns-cni\x2d360208f8\x2de025\x2d3edd\x2dce6a\x2d12d294f49a48.mount: Deactivated successfully. Sep 12 17:31:09.481930 kubelet[2641]: E0912 17:31:09.481859 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:09.536148 systemd-networkd[1236]: calie3fc8ae14d0: Gained IPv6LL Sep 12 17:31:09.549451 systemd-networkd[1236]: cali87e24853e29: Link UP Sep 12 17:31:09.550887 systemd-networkd[1236]: cali87e24853e29: Gained carrier Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.452 [INFO][4732] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0 calico-apiserver-75f496c6fb- calico-apiserver e9d7de93-a50e-470c-992a-6e1a6cde9578 959 0 2025-09-12 17:30:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75f496c6fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-75f496c6fb-6462h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali87e24853e29 [] [] }} ContainerID="6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-6462h" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--6462h-" Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.452 [INFO][4732] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-6462h" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.491 [INFO][4750] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" HandleID="k8s-pod-network.6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" Workload="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.491 [INFO][4750] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" HandleID="k8s-pod-network.6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" Workload="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001376f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-75f496c6fb-6462h", "timestamp":"2025-09-12 17:31:09.491320086 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.491 [INFO][4750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.491 [INFO][4750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.491 [INFO][4750] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.505 [INFO][4750] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" host="localhost" Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.512 [INFO][4750] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.518 [INFO][4750] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.521 [INFO][4750] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.524 [INFO][4750] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.524 [INFO][4750] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" host="localhost" Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.528 [INFO][4750] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0 Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.534 [INFO][4750] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" host="localhost" Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.543 [INFO][4750] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" host="localhost" Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.543 [INFO][4750] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" host="localhost" Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.543 [INFO][4750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:09.607677 containerd[1548]: 2025-09-12 17:31:09.543 [INFO][4750] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" HandleID="k8s-pod-network.6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" Workload="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:09.608414 containerd[1548]: 2025-09-12 17:31:09.546 [INFO][4732] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-6462h" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0", GenerateName:"calico-apiserver-75f496c6fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9d7de93-a50e-470c-992a-6e1a6cde9578", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f496c6fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-75f496c6fb-6462h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87e24853e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:09.608414 containerd[1548]: 2025-09-12 17:31:09.546 [INFO][4732] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-6462h" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:09.608414 containerd[1548]: 2025-09-12 17:31:09.546 [INFO][4732] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87e24853e29 ContainerID="6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-6462h" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:09.608414 containerd[1548]: 2025-09-12 
17:31:09.551 [INFO][4732] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-6462h" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:09.608414 containerd[1548]: 2025-09-12 17:31:09.552 [INFO][4732] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-6462h" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0", GenerateName:"calico-apiserver-75f496c6fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9d7de93-a50e-470c-992a-6e1a6cde9578", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f496c6fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0", Pod:"calico-apiserver-75f496c6fb-6462h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87e24853e29", MAC:"62:e7:3a:5b:34:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:09.608414 containerd[1548]: 2025-09-12 17:31:09.596 [INFO][4732] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0" Namespace="calico-apiserver" Pod="calico-apiserver-75f496c6fb-6462h" WorkloadEndpoint="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:09.632339 containerd[1548]: time="2025-09-12T17:31:09.631309578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:09.632339 containerd[1548]: time="2025-09-12T17:31:09.632158925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 17:31:09.634157 containerd[1548]: time="2025-09-12T17:31:09.632644598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:09.634157 containerd[1548]: time="2025-09-12T17:31:09.632701517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:09.634157 containerd[1548]: time="2025-09-12T17:31:09.632712637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:09.634157 containerd[1548]: time="2025-09-12T17:31:09.632821675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:09.638894 containerd[1548]: time="2025-09-12T17:31:09.638847381Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:09.641378 containerd[1548]: time="2025-09-12T17:31:09.641337262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:09.642194 containerd[1548]: time="2025-09-12T17:31:09.642152569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.012922423s" Sep 12 17:31:09.642194 containerd[1548]: time="2025-09-12T17:31:09.642189888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 17:31:09.643737 containerd[1548]: time="2025-09-12T17:31:09.643547987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:31:09.644873 containerd[1548]: time="2025-09-12T17:31:09.644844287Z" level=info msg="CreateContainer within sandbox \"dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:31:09.663656 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:09.672264 containerd[1548]: time="2025-09-12T17:31:09.672202380Z" level=info msg="CreateContainer within sandbox \"dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ad020fa59c687b429a4f0e0e2e01cf302a3ccbec230c1c34d84312c74f1234b7\"" Sep 12 17:31:09.672797 containerd[1548]: time="2025-09-12T17:31:09.672767411Z" level=info msg="StartContainer for \"ad020fa59c687b429a4f0e0e2e01cf302a3ccbec230c1c34d84312c74f1234b7\"" Sep 12 17:31:09.685824 containerd[1548]: time="2025-09-12T17:31:09.685721888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f496c6fb-6462h,Uid:e9d7de93-a50e-470c-992a-6e1a6cde9578,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0\"" Sep 12 17:31:09.727236 containerd[1548]: time="2025-09-12T17:31:09.727189760Z" level=info msg="StartContainer for \"ad020fa59c687b429a4f0e0e2e01cf302a3ccbec230c1c34d84312c74f1234b7\" returns successfully" Sep 12 17:31:10.285304 containerd[1548]: time="2025-09-12T17:31:10.285193111Z" level=info msg="StopPodSandbox for \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\"" Sep 12 17:31:10.367543 systemd-networkd[1236]: cali92a09bf834b: 
Gained IPv6LL Sep 12 17:31:10.367873 systemd-networkd[1236]: calid06122c5ae6: Gained IPv6LL Sep 12 17:31:10.378946 containerd[1548]: 2025-09-12 17:31:10.332 [INFO][4853] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:31:10.378946 containerd[1548]: 2025-09-12 17:31:10.332 [INFO][4853] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" iface="eth0" netns="/var/run/netns/cni-748e986d-7265-610a-8486-fd0d353b3902" Sep 12 17:31:10.378946 containerd[1548]: 2025-09-12 17:31:10.333 [INFO][4853] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" iface="eth0" netns="/var/run/netns/cni-748e986d-7265-610a-8486-fd0d353b3902" Sep 12 17:31:10.378946 containerd[1548]: 2025-09-12 17:31:10.333 [INFO][4853] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" iface="eth0" netns="/var/run/netns/cni-748e986d-7265-610a-8486-fd0d353b3902" Sep 12 17:31:10.378946 containerd[1548]: 2025-09-12 17:31:10.333 [INFO][4853] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:31:10.378946 containerd[1548]: 2025-09-12 17:31:10.333 [INFO][4853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:31:10.378946 containerd[1548]: 2025-09-12 17:31:10.355 [INFO][4862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" HandleID="k8s-pod-network.a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Workload="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:10.378946 containerd[1548]: 2025-09-12 17:31:10.355 [INFO][4862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:10.378946 containerd[1548]: 2025-09-12 17:31:10.355 [INFO][4862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:10.378946 containerd[1548]: 2025-09-12 17:31:10.370 [WARNING][4862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" HandleID="k8s-pod-network.a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Workload="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:10.378946 containerd[1548]: 2025-09-12 17:31:10.371 [INFO][4862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" HandleID="k8s-pod-network.a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Workload="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:10.378946 containerd[1548]: 2025-09-12 17:31:10.374 [INFO][4862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:10.378946 containerd[1548]: 2025-09-12 17:31:10.376 [INFO][4853] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:31:10.380571 containerd[1548]: time="2025-09-12T17:31:10.379142480Z" level=info msg="TearDown network for sandbox \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\" successfully" Sep 12 17:31:10.380571 containerd[1548]: time="2025-09-12T17:31:10.379170279Z" level=info msg="StopPodSandbox for \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\" returns successfully" Sep 12 17:31:10.380739 containerd[1548]: time="2025-09-12T17:31:10.380622497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gz6qp,Uid:d33a2b08-c505-4cce-a314-e4b791e0c009,Namespace:calico-system,Attempt:1,}" Sep 12 17:31:10.383531 systemd[1]: run-netns-cni\x2d748e986d\x2d7265\x2d610a\x2d8486\x2dfd0d353b3902.mount: Deactivated successfully. Sep 12 17:31:10.488835 kubelet[2641]: E0912 17:31:10.488410 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:10.513590 systemd-networkd[1236]: calib45b26671a2: Link UP Sep 12 17:31:10.513843 systemd-networkd[1236]: calib45b26671a2: Gained carrier Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.436 [INFO][4871] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--gz6qp-eth0 goldmane-7988f88666- calico-system d33a2b08-c505-4cce-a314-e4b791e0c009 972 0 2025-09-12 17:30:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-gz6qp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib45b26671a2 [] [] }} ContainerID="65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" Namespace="calico-system" Pod="goldmane-7988f88666-gz6qp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gz6qp-" Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.437 [INFO][4871] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" Namespace="calico-system" Pod="goldmane-7988f88666-gz6qp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.460 [INFO][4886] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" HandleID="k8s-pod-network.65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" Workload="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.460 [INFO][4886] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" HandleID="k8s-pod-network.65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" Workload="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ac700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-gz6qp", "timestamp":"2025-09-12 17:31:10.460534279 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.460 [INFO][4886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.460 [INFO][4886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.460 [INFO][4886] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.470 [INFO][4886] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" host="localhost" Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.475 [INFO][4886] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.480 [INFO][4886] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.483 [INFO][4886] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.486 [INFO][4886] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.486 [INFO][4886] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" host="localhost" Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.490 [INFO][4886] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2 Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.497 [INFO][4886] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" host="localhost" Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.507 [INFO][4886] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" host="localhost" Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.507 [INFO][4886] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" host="localhost" Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.507 [INFO][4886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
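
On the kubelet "Nameserver limits exceeded" error that recurs through this section: kubelet caps a pod's resolv.conf at three nameservers (the classic glibc resolver limit), so with 1.1.1.1, 1.0.0.1 and 8.8.8.8 applied, any further host entries are dropped and the error is logged on every sync. A minimal sketch of that truncation, assuming a plain list of host nameservers; the constant and helper names below are illustrative, not kubelet's own:

```go
package main

import "fmt"

// maxDNSNameservers mirrors the three-nameserver resolv.conf limit that
// kubelet enforces; the name here is illustrative.
const maxDNSNameservers = 3

// applyNameserverLimit keeps the first three nameservers and reports
// whether any were dropped -- the condition behind the repeated
// "Nameserver limits exceeded" error in the log above.
func applyNameserverLimit(ns []string) (applied []string, truncated bool) {
	if len(ns) <= maxDNSNameservers {
		return ns, false
	}
	return ns[:maxDNSNameservers], true
}

func main() {
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, truncated := applyNameserverLimit(host)
	if truncated {
		// Corresponds to the "applied nameserver line" kubelet reports.
		fmt.Printf("nameserver limits exceeded, applied: %v\n", applied)
	}
}
```
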
Sep 12 17:31:10.563498 containerd[1548]: 2025-09-12 17:31:10.507 [INFO][4886] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" HandleID="k8s-pod-network.65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" Workload="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:10.564042 containerd[1548]: 2025-09-12 17:31:10.510 [INFO][4871] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" Namespace="calico-system" Pod="goldmane-7988f88666-gz6qp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--gz6qp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d33a2b08-c505-4cce-a314-e4b791e0c009", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-gz6qp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib45b26671a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:10.564042 containerd[1548]: 2025-09-12 17:31:10.510 [INFO][4871] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" Namespace="calico-system" Pod="goldmane-7988f88666-gz6qp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:10.564042 containerd[1548]: 2025-09-12 17:31:10.510 [INFO][4871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib45b26671a2 ContainerID="65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" Namespace="calico-system" Pod="goldmane-7988f88666-gz6qp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:10.564042 containerd[1548]: 2025-09-12 17:31:10.515 [INFO][4871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" Namespace="calico-system" Pod="goldmane-7988f88666-gz6qp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:10.564042 containerd[1548]: 2025-09-12 17:31:10.519 [INFO][4871] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" Namespace="calico-system" Pod="goldmane-7988f88666-gz6qp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--gz6qp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d33a2b08-c505-4cce-a314-e4b791e0c009", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2", Pod:"goldmane-7988f88666-gz6qp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib45b26671a2", MAC:"02:89:d8:69:f0:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:10.564042 containerd[1548]: 2025-09-12 17:31:10.545 [INFO][4871] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2" Namespace="calico-system" Pod="goldmane-7988f88666-gz6qp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:10.623428 containerd[1548]: time="2025-09-12T17:31:10.623262279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:10.623428 containerd[1548]: time="2025-09-12T17:31:10.623321518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:10.623428 containerd[1548]: time="2025-09-12T17:31:10.623332838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:10.624022 containerd[1548]: time="2025-09-12T17:31:10.623911789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:10.651774 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:10.670430 containerd[1548]: time="2025-09-12T17:31:10.670389841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gz6qp,Uid:d33a2b08-c505-4cce-a314-e4b791e0c009,Namespace:calico-system,Attempt:1,} returns sandbox id \"65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2\"" Sep 12 17:31:10.879209 systemd-networkd[1236]: cali87e24853e29: Gained IPv6LL Sep 12 17:31:11.101535 containerd[1548]: time="2025-09-12T17:31:11.100967275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:11.101723 containerd[1548]: time="2025-09-12T17:31:11.101689984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 17:31:11.102694 containerd[1548]: time="2025-09-12T17:31:11.102664689Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:11.105027 containerd[1548]: time="2025-09-12T17:31:11.104988855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:11.105825 containerd[1548]: time="2025-09-12T17:31:11.105780243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.462186616s" Sep 12 17:31:11.105881 containerd[1548]: time="2025-09-12T17:31:11.105832522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:31:11.107283 containerd[1548]: time="2025-09-12T17:31:11.107070424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:31:11.109055 containerd[1548]: time="2025-09-12T17:31:11.108898317Z" level=info msg="CreateContainer within sandbox \"8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:31:11.123686 containerd[1548]: time="2025-09-12T17:31:11.123634377Z" level=info msg="CreateContainer within sandbox \"8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ebf8c9a0b32773e2c47eabffdcbd5f04649be7ce8eede3c0e8a35ab1d84ee411\"" Sep 12 17:31:11.125416 containerd[1548]: time="2025-09-12T17:31:11.124235568Z" level=info msg="StartContainer for \"ebf8c9a0b32773e2c47eabffdcbd5f04649be7ce8eede3c0e8a35ab1d84ee411\"" Sep 12 17:31:11.191206 containerd[1548]: time="2025-09-12T17:31:11.191033294Z" level=info msg="StartContainer for \"ebf8c9a0b32773e2c47eabffdcbd5f04649be7ce8eede3c0e8a35ab1d84ee411\" returns successfully" Sep 12 17:31:11.285047 containerd[1548]: time="2025-09-12T17:31:11.284993656Z" level=info msg="StopPodSandbox for 
\"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\"" Sep 12 17:31:11.285335 containerd[1548]: time="2025-09-12T17:31:11.285312331Z" level=info msg="StopPodSandbox for \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\"" Sep 12 17:31:11.395630 containerd[1548]: time="2025-09-12T17:31:11.393340204Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:11.395630 containerd[1548]: time="2025-09-12T17:31:11.394011594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:31:11.397489 containerd[1548]: time="2025-09-12T17:31:11.397446903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 290.32388ms" Sep 12 17:31:11.397677 containerd[1548]: time="2025-09-12T17:31:11.397656379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:31:11.399565 containerd[1548]: time="2025-09-12T17:31:11.399536152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:31:11.406125 containerd[1548]: time="2025-09-12T17:31:11.406018295Z" level=info msg="CreateContainer within sandbox \"6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:31:11.417398 containerd[1548]: 2025-09-12 17:31:11.346 [INFO][5013] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:31:11.417398 containerd[1548]: 2025-09-12 17:31:11.346 [INFO][5013] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" iface="eth0" netns="/var/run/netns/cni-7cd7d0a7-a5f6-2184-ec4e-8292e34b63fb" Sep 12 17:31:11.417398 containerd[1548]: 2025-09-12 17:31:11.347 [INFO][5013] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" iface="eth0" netns="/var/run/netns/cni-7cd7d0a7-a5f6-2184-ec4e-8292e34b63fb" Sep 12 17:31:11.417398 containerd[1548]: 2025-09-12 17:31:11.347 [INFO][5013] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" iface="eth0" netns="/var/run/netns/cni-7cd7d0a7-a5f6-2184-ec4e-8292e34b63fb" Sep 12 17:31:11.417398 containerd[1548]: 2025-09-12 17:31:11.347 [INFO][5013] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:31:11.417398 containerd[1548]: 2025-09-12 17:31:11.347 [INFO][5013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:31:11.417398 containerd[1548]: 2025-09-12 17:31:11.389 [INFO][5033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" HandleID="k8s-pod-network.230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Workload="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:11.417398 containerd[1548]: 2025-09-12 17:31:11.389 [INFO][5033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:11.417398 containerd[1548]: 2025-09-12 17:31:11.389 [INFO][5033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:11.417398 containerd[1548]: 2025-09-12 17:31:11.401 [WARNING][5033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" HandleID="k8s-pod-network.230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Workload="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:11.417398 containerd[1548]: 2025-09-12 17:31:11.401 [INFO][5033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" HandleID="k8s-pod-network.230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Workload="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:11.417398 containerd[1548]: 2025-09-12 17:31:11.405 [INFO][5033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:11.417398 containerd[1548]: 2025-09-12 17:31:11.413 [INFO][5013] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:31:11.419221 containerd[1548]: time="2025-09-12T17:31:11.418181114Z" level=info msg="TearDown network for sandbox \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\" successfully" Sep 12 17:31:11.419221 containerd[1548]: time="2025-09-12T17:31:11.418221233Z" level=info msg="StopPodSandbox for \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\" returns successfully" Sep 12 17:31:11.420392 systemd[1]: run-netns-cni\x2d7cd7d0a7\x2da5f6\x2d2184\x2dec4e\x2d8292e34b63fb.mount: Deactivated successfully. Sep 12 17:31:11.422379 containerd[1548]: time="2025-09-12T17:31:11.422341932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d9df5885-5z5xb,Uid:89f3f547-52ae-4646-86ac-31102c426a8a,Namespace:calico-system,Attempt:1,}" Sep 12 17:31:11.426006 containerd[1548]: 2025-09-12 17:31:11.358 [INFO][5012] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:31:11.426006 containerd[1548]: 2025-09-12 17:31:11.359 [INFO][5012] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" iface="eth0" netns="/var/run/netns/cni-d9b4e0d8-ab0d-8a16-548a-99d481b6faeb" Sep 12 17:31:11.426006 containerd[1548]: 2025-09-12 17:31:11.359 [INFO][5012] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" iface="eth0" netns="/var/run/netns/cni-d9b4e0d8-ab0d-8a16-548a-99d481b6faeb" Sep 12 17:31:11.426006 containerd[1548]: 2025-09-12 17:31:11.359 [INFO][5012] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" iface="eth0" netns="/var/run/netns/cni-d9b4e0d8-ab0d-8a16-548a-99d481b6faeb" Sep 12 17:31:11.426006 containerd[1548]: 2025-09-12 17:31:11.359 [INFO][5012] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:31:11.426006 containerd[1548]: 2025-09-12 17:31:11.359 [INFO][5012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:31:11.426006 containerd[1548]: 2025-09-12 17:31:11.391 [INFO][5040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" HandleID="k8s-pod-network.15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Workload="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:11.426006 containerd[1548]: 2025-09-12 17:31:11.391 [INFO][5040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:11.426006 containerd[1548]: 2025-09-12 17:31:11.405 [INFO][5040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:11.426006 containerd[1548]: 2025-09-12 17:31:11.415 [WARNING][5040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" HandleID="k8s-pod-network.15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Workload="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:11.426006 containerd[1548]: 2025-09-12 17:31:11.416 [INFO][5040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" HandleID="k8s-pod-network.15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Workload="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:11.426006 containerd[1548]: 2025-09-12 17:31:11.419 [INFO][5040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:11.426006 containerd[1548]: 2025-09-12 17:31:11.423 [INFO][5012] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:31:11.426785 containerd[1548]: time="2025-09-12T17:31:11.426738347Z" level=info msg="TearDown network for sandbox \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\" successfully" Sep 12 17:31:11.426987 containerd[1548]: time="2025-09-12T17:31:11.426969343Z" level=info msg="StopPodSandbox for \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\" returns successfully" Sep 12 17:31:11.428548 kubelet[2641]: E0912 17:31:11.428281 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:11.429315 systemd[1]: run-netns-cni\x2dd9b4e0d8\x2dab0d\x2d8a16\x2d548a\x2d99d481b6faeb.mount: Deactivated successfully. Sep 12 17:31:11.431047 containerd[1548]: time="2025-09-12T17:31:11.430070617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58qjr,Uid:1786da27-c74b-428e-9360-4f44ff994f41,Namespace:kube-system,Attempt:1,}" Sep 12 17:31:11.459675 containerd[1548]: time="2025-09-12T17:31:11.459603338Z" level=info msg="CreateContainer within sandbox \"6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a6ae4de6c711c2152631f0d8af8a270c2024f213df7be3017231933ccddc662a\"" Sep 12 17:31:11.462468 containerd[1548]: time="2025-09-12T17:31:11.462427376Z" level=info msg="StartContainer for \"a6ae4de6c711c2152631f0d8af8a270c2024f213df7be3017231933ccddc662a\"" Sep 12 17:31:11.513286 kubelet[2641]: I0912 17:31:11.512985 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75f496c6fb-7szgf" podStartSLOduration=24.138958202 podStartE2EDuration="26.512965384s" podCreationTimestamp="2025-09-12 17:30:45 +0000 UTC" firstStartedPulling="2025-09-12 17:31:08.732846685 +0000 UTC m=+40.565468156" lastFinishedPulling="2025-09-12 17:31:11.106853867 +0000 UTC m=+42.939475338" observedRunningTime="2025-09-12 17:31:11.51253519 +0000 UTC m=+43.345156661" watchObservedRunningTime="2025-09-12 17:31:11.512965384 +0000 UTC m=+43.345586815" Sep 12 17:31:11.612784 containerd[1548]: time="2025-09-12T17:31:11.612710499Z" level=info msg="StartContainer for \"a6ae4de6c711c2152631f0d8af8a270c2024f213df7be3017231933ccddc662a\" returns successfully" Sep 12 17:31:11.638727 systemd-networkd[1236]: calie404673a48d: Link UP Sep 12 17:31:11.639142 systemd-networkd[1236]: calie404673a48d: Gained carrier Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.535 [INFO][5060] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0 calico-kube-controllers-64d9df5885- calico-system 89f3f547-52ae-4646-86ac-31102c426a8a 985 0 2025-09-12 17:30:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64d9df5885 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-64d9df5885-5z5xb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie404673a48d [] [] }} ContainerID="33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" Namespace="calico-system" 
Pod="calico-kube-controllers-64d9df5885-5z5xb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-" Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.535 [INFO][5060] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" Namespace="calico-system" Pod="calico-kube-controllers-64d9df5885-5z5xb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.576 [INFO][5108] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" HandleID="k8s-pod-network.33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" Workload="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.576 [INFO][5108] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" HandleID="k8s-pod-network.33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" Workload="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001367b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-64d9df5885-5z5xb", "timestamp":"2025-09-12 17:31:11.576700795 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.577 [INFO][5108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.577 [INFO][5108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.578 [INFO][5108] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.590 [INFO][5108] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" host="localhost" Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.595 [INFO][5108] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.602 [INFO][5108] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.605 [INFO][5108] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.607 [INFO][5108] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.608 [INFO][5108] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" host="localhost" Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.610 [INFO][5108] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590 Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.617 [INFO][5108] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" host="localhost" Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.628 [INFO][5108] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" host="localhost" Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.628 [INFO][5108] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" host="localhost" Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.628 [INFO][5108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:31:11.660497 containerd[1548]: 2025-09-12 17:31:11.628 [INFO][5108] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" HandleID="k8s-pod-network.33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" Workload="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:11.663006 containerd[1548]: 2025-09-12 17:31:11.632 [INFO][5060] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" Namespace="calico-system" Pod="calico-kube-controllers-64d9df5885-5z5xb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0", GenerateName:"calico-kube-controllers-64d9df5885-", Namespace:"calico-system", SelfLink:"", UID:"89f3f547-52ae-4646-86ac-31102c426a8a", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d9df5885", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-64d9df5885-5z5xb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie404673a48d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:11.663006 containerd[1548]: 2025-09-12 17:31:11.632 [INFO][5060] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" Namespace="calico-system" Pod="calico-kube-controllers-64d9df5885-5z5xb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:11.663006 containerd[1548]: 2025-09-12 17:31:11.632 [INFO][5060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie404673a48d ContainerID="33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" Namespace="calico-system" Pod="calico-kube-controllers-64d9df5885-5z5xb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:11.663006 containerd[1548]: 2025-09-12 17:31:11.639 [INFO][5060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" Namespace="calico-system" Pod="calico-kube-controllers-64d9df5885-5z5xb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:11.663006 containerd[1548]: 2025-09-12 17:31:11.641 [INFO][5060] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" Namespace="calico-system" Pod="calico-kube-controllers-64d9df5885-5z5xb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0", GenerateName:"calico-kube-controllers-64d9df5885-", Namespace:"calico-system", SelfLink:"", UID:"89f3f547-52ae-4646-86ac-31102c426a8a", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d9df5885", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590", Pod:"calico-kube-controllers-64d9df5885-5z5xb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie404673a48d", MAC:"fa:dd:e0:31:83:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:11.663006 containerd[1548]: 2025-09-12 17:31:11.655 [INFO][5060] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590" Namespace="calico-system" Pod="calico-kube-controllers-64d9df5885-5z5xb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:11.686336 containerd[1548]: time="2025-09-12T17:31:11.685936290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:11.686336 containerd[1548]: time="2025-09-12T17:31:11.685993929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:11.686336 containerd[1548]: time="2025-09-12T17:31:11.686005808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:11.686336 containerd[1548]: time="2025-09-12T17:31:11.686095087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:11.731827 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:11.753302 systemd-networkd[1236]: cali48e1ad897c0: Link UP Sep 12 17:31:11.753562 systemd-networkd[1236]: cali48e1ad897c0: Gained carrier Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.534 [INFO][5064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0 coredns-7c65d6cfc9- kube-system 1786da27-c74b-428e-9360-4f44ff994f41 986 0 2025-09-12 17:30:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-58qjr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali48e1ad897c0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58qjr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58qjr-" Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.535 [INFO][5064] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58qjr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.578 [INFO][5110] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" HandleID="k8s-pod-network.6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" Workload="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.578 [INFO][5110] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" HandleID="k8s-pod-network.6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" Workload="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-58qjr", "timestamp":"2025-09-12 17:31:11.578625446 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.578 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.628 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.628 [INFO][5110] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.690 [INFO][5110] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" host="localhost" Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.699 [INFO][5110] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.708 [INFO][5110] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.712 [INFO][5110] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.717 [INFO][5110] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.718 [INFO][5110] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" host="localhost" Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.721 [INFO][5110] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.728 [INFO][5110] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" host="localhost" Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.741 [INFO][5110] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" host="localhost" Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.741 [INFO][5110] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" host="localhost" Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.743 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:31:11.775737 containerd[1548]: 2025-09-12 17:31:11.743 [INFO][5110] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" HandleID="k8s-pod-network.6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" Workload="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:11.776385 containerd[1548]: 2025-09-12 17:31:11.751 [INFO][5064] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58qjr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1786da27-c74b-428e-9360-4f44ff994f41", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-58qjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48e1ad897c0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:11.776385 containerd[1548]: 2025-09-12 17:31:11.751 [INFO][5064] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58qjr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:11.776385 containerd[1548]: 2025-09-12 17:31:11.751 [INFO][5064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48e1ad897c0 ContainerID="6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58qjr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:11.776385 containerd[1548]: 2025-09-12 17:31:11.755 [INFO][5064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58qjr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:11.776385 
containerd[1548]: 2025-09-12 17:31:11.755 [INFO][5064] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58qjr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1786da27-c74b-428e-9360-4f44ff994f41", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e", Pod:"coredns-7c65d6cfc9-58qjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48e1ad897c0", MAC:"62:24:c7:6d:3c:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:11.776385 containerd[1548]: 2025-09-12 17:31:11.769 [INFO][5064] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58qjr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:11.780869 containerd[1548]: time="2025-09-12T17:31:11.780826797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d9df5885-5z5xb,Uid:89f3f547-52ae-4646-86ac-31102c426a8a,Namespace:calico-system,Attempt:1,} returns sandbox id \"33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590\"" Sep 12 17:31:11.805987 containerd[1548]: time="2025-09-12T17:31:11.805605829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:11.805987 containerd[1548]: time="2025-09-12T17:31:11.805680548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:11.805987 containerd[1548]: time="2025-09-12T17:31:11.805695907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:11.805987 containerd[1548]: time="2025-09-12T17:31:11.805797106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:11.835163 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:11.865582 containerd[1548]: time="2025-09-12T17:31:11.865492977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58qjr,Uid:1786da27-c74b-428e-9360-4f44ff994f41,Namespace:kube-system,Attempt:1,} returns sandbox id \"6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e\"" Sep 12 17:31:11.866600 kubelet[2641]: E0912 17:31:11.866505 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:11.871688 containerd[1548]: time="2025-09-12T17:31:11.871645686Z" level=info msg="CreateContainer within sandbox \"6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:31:11.889029 containerd[1548]: time="2025-09-12T17:31:11.888438316Z" level=info msg="CreateContainer within sandbox \"6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"415e496fbde91cf1d0de0d81ee158ce3df5ac117d931158208f6f0097d494a18\"" Sep 12 17:31:11.889789 containerd[1548]: time="2025-09-12T17:31:11.889759536Z" level=info msg="StartContainer for \"415e496fbde91cf1d0de0d81ee158ce3df5ac117d931158208f6f0097d494a18\"" Sep 12 17:31:11.947997 containerd[1548]: time="2025-09-12T17:31:11.947905111Z" level=info msg="StartContainer for \"415e496fbde91cf1d0de0d81ee158ce3df5ac117d931158208f6f0097d494a18\" returns successfully" Sep 12 17:31:12.159105 systemd-networkd[1236]: calib45b26671a2: Gained IPv6LL Sep 12 17:31:12.517764 kubelet[2641]: I0912 17:31:12.517726 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:31:12.519842 kubelet[2641]: E0912 17:31:12.518440 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:12.536042 kubelet[2641]: I0912 17:31:12.535875 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75f496c6fb-6462h" podStartSLOduration=25.824744837 podStartE2EDuration="27.535852862s" podCreationTimestamp="2025-09-12 17:30:45 +0000 UTC" firstStartedPulling="2025-09-12 17:31:09.687972253 +0000 UTC m=+41.520593724" lastFinishedPulling="2025-09-12 17:31:11.399080278 +0000 UTC m=+43.231701749" observedRunningTime="2025-09-12 17:31:12.530725776 +0000 UTC m=+44.363347247" watchObservedRunningTime="2025-09-12 17:31:12.535852862 +0000 UTC m=+44.368474333" Sep 12 17:31:12.899755 containerd[1548]: time="2025-09-12T17:31:12.899618851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:12.903766 containerd[1548]: time="2025-09-12T17:31:12.903675472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 12 17:31:12.911894 containerd[1548]: time="2025-09-12T17:31:12.911777194Z" 
level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:12.927427 containerd[1548]: time="2025-09-12T17:31:12.927370007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:12.928507 containerd[1548]: time="2025-09-12T17:31:12.928467551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.528744683s" Sep 12 17:31:12.928507 containerd[1548]: time="2025-09-12T17:31:12.928509991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 12 17:31:12.930716 containerd[1548]: time="2025-09-12T17:31:12.930682999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:31:12.931583 containerd[1548]: time="2025-09-12T17:31:12.931533427Z" level=info msg="CreateContainer within sandbox \"dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:31:13.022843 containerd[1548]: time="2025-09-12T17:31:13.022775467Z" level=info msg="CreateContainer within sandbox \"dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"43c87cca37697f7532f66dc6c5566c7c69483620bcf669e84b39f2eff4b5393d\"" Sep 12 17:31:13.023834 containerd[1548]: time="2025-09-12T17:31:13.023562856Z" level=info msg="StartContainer for \"43c87cca37697f7532f66dc6c5566c7c69483620bcf669e84b39f2eff4b5393d\"" Sep 12 17:31:13.055023 systemd-networkd[1236]: cali48e1ad897c0: Gained IPv6LL Sep 12 17:31:13.094475 containerd[1548]: time="2025-09-12T17:31:13.094427887Z" level=info msg="StartContainer for \"43c87cca37697f7532f66dc6c5566c7c69483620bcf669e84b39f2eff4b5393d\" returns successfully" Sep 12 17:31:13.368864 kubelet[2641]: I0912 17:31:13.368205 2641 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:31:13.368864 kubelet[2641]: I0912 17:31:13.368295 2641 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:31:13.529839 kubelet[2641]: E0912 17:31:13.528415 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:13.542921 kubelet[2641]: I0912 17:31:13.541975 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t2lbd" podStartSLOduration=20.24092022 podStartE2EDuration="24.541956799s" podCreationTimestamp="2025-09-12 17:30:49 +0000 UTC" firstStartedPulling="2025-09-12 17:31:08.628923831 +0000 UTC m=+40.461545302" 
lastFinishedPulling="2025-09-12 17:31:12.92996041 +0000 UTC m=+44.762581881" observedRunningTime="2025-09-12 17:31:13.540910174 +0000 UTC m=+45.373531645" watchObservedRunningTime="2025-09-12 17:31:13.541956799 +0000 UTC m=+45.374578270" Sep 12 17:31:13.542921 kubelet[2641]: I0912 17:31:13.542160 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-58qjr" podStartSLOduration=39.542152717 podStartE2EDuration="39.542152717s" podCreationTimestamp="2025-09-12 17:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:31:12.549521743 +0000 UTC m=+44.382143214" watchObservedRunningTime="2025-09-12 17:31:13.542152717 +0000 UTC m=+45.374774148" Sep 12 17:31:13.631115 systemd-networkd[1236]: calie404673a48d: Gained IPv6LL Sep 12 17:31:14.355215 systemd[1]: Started sshd@7-10.0.0.114:22-10.0.0.1:48894.service - OpenSSH per-connection server daemon (10.0.0.1:48894). Sep 12 17:31:14.376836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1893961199.mount: Deactivated successfully. Sep 12 17:31:14.403864 sshd[5338]: Accepted publickey for core from 10.0.0.1 port 48894 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:31:14.405915 sshd[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:14.411479 systemd-logind[1530]: New session 8 of user core. Sep 12 17:31:14.418194 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:31:14.530428 kubelet[2641]: E0912 17:31:14.530046 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:14.756708 sshd[5338]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:14.760822 systemd[1]: sshd@7-10.0.0.114:22-10.0.0.1:48894.service: Deactivated successfully. Sep 12 17:31:14.766702 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:31:14.767787 systemd-logind[1530]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:31:14.769035 systemd-logind[1530]: Removed session 8. 
Sep 12 17:31:14.955074 containerd[1548]: time="2025-09-12T17:31:14.955027095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:14.955689 containerd[1548]: time="2025-09-12T17:31:14.955654127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 12 17:31:14.956633 containerd[1548]: time="2025-09-12T17:31:14.956599753Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:14.958903 containerd[1548]: time="2025-09-12T17:31:14.958865122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:14.959692 containerd[1548]: time="2025-09-12T17:31:14.959658191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.028763195s" Sep 12 17:31:14.959755 containerd[1548]: time="2025-09-12T17:31:14.959735710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 12 17:31:14.962356 containerd[1548]: time="2025-09-12T17:31:14.960680977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:31:14.966862 containerd[1548]: time="2025-09-12T17:31:14.966791171Z" level=info msg="CreateContainer within sandbox \"65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:31:14.983920 containerd[1548]: time="2025-09-12T17:31:14.983872693Z" level=info msg="CreateContainer within sandbox \"65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"721d148be5ce19e3ef8748cad06c5f28b76c55d8523cea65492aade18950a773\"" Sep 12 17:31:14.985178 containerd[1548]: time="2025-09-12T17:31:14.985128676Z" level=info msg="StartContainer for \"721d148be5ce19e3ef8748cad06c5f28b76c55d8523cea65492aade18950a773\"" Sep 12 17:31:15.043058 containerd[1548]: time="2025-09-12T17:31:15.042940722Z" level=info msg="StartContainer for \"721d148be5ce19e3ef8748cad06c5f28b76c55d8523cea65492aade18950a773\" returns successfully" Sep 12 17:31:15.552678 kubelet[2641]: I0912 17:31:15.552122 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-gz6qp" podStartSLOduration=21.263119134 podStartE2EDuration="25.552102769s" podCreationTimestamp="2025-09-12 17:30:50 +0000 UTC" firstStartedPulling="2025-09-12 17:31:10.671493784 +0000 UTC m=+42.504115255" lastFinishedPulling="2025-09-12 17:31:14.960477419 +0000 UTC m=+46.793098890" observedRunningTime="2025-09-12 17:31:15.546123371 +0000 UTC m=+47.378744842" watchObservedRunningTime="2025-09-12 17:31:15.552102769 +0000 UTC m=+47.384724240" Sep 12 17:31:16.860271 containerd[1548]: time="2025-09-12T17:31:16.860206010Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:16.861341 containerd[1548]: time="2025-09-12T17:31:16.861311755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 12 17:31:16.862690 containerd[1548]: time="2025-09-12T17:31:16.862639657Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:16.865701 containerd[1548]: time="2025-09-12T17:31:16.865668097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:16.867657 containerd[1548]: time="2025-09-12T17:31:16.867587151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.906868015s" Sep 12 17:31:16.867657 containerd[1548]: time="2025-09-12T17:31:16.867644390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 17:31:16.878538 containerd[1548]: time="2025-09-12T17:31:16.878368847Z" level=info msg="CreateContainer within sandbox \"33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:31:16.910398 containerd[1548]: time="2025-09-12T17:31:16.910249020Z" level=info msg="CreateContainer within sandbox \"33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0cb2b45aedbb2d7f678db7e73f7042e802cf7f61c1567545cc5b3a36e3714889\"" Sep 12 17:31:16.911495 containerd[1548]: time="2025-09-12T17:31:16.911080049Z" level=info msg="StartContainer for \"0cb2b45aedbb2d7f678db7e73f7042e802cf7f61c1567545cc5b3a36e3714889\"" Sep 12 17:31:16.977044 containerd[1548]: time="2025-09-12T17:31:16.976790488Z" level=info msg="StartContainer for \"0cb2b45aedbb2d7f678db7e73f7042e802cf7f61c1567545cc5b3a36e3714889\" returns successfully" Sep 12 17:31:18.625729 kubelet[2641]: I0912 17:31:18.625543 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64d9df5885-5z5xb" podStartSLOduration=24.539498376 podStartE2EDuration="29.625523982s" podCreationTimestamp="2025-09-12 17:30:49 +0000 UTC" firstStartedPulling="2025-09-12 17:31:11.782274376 +0000 UTC m=+43.614895847" lastFinishedPulling="2025-09-12 17:31:16.868299982 +0000 UTC m=+48.700921453" observedRunningTime="2025-09-12 17:31:17.555530071 +0000 UTC m=+49.388151582" watchObservedRunningTime="2025-09-12 17:31:18.625523982 +0000 UTC m=+50.458145453" Sep 12 17:31:19.772101 systemd[1]: Started sshd@8-10.0.0.114:22-10.0.0.1:48900.service - OpenSSH per-connection server daemon (10.0.0.1:48900). 
Sep 12 17:31:19.826211 sshd[5522]: Accepted publickey for core from 10.0.0.1 port 48900 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:31:19.828059 sshd[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:19.835957 systemd-logind[1530]: New session 9 of user core. Sep 12 17:31:19.846151 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:31:20.145027 sshd[5522]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:20.150153 systemd[1]: sshd@8-10.0.0.114:22-10.0.0.1:48900.service: Deactivated successfully. Sep 12 17:31:20.155071 systemd-logind[1530]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:31:20.155280 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:31:20.156490 systemd-logind[1530]: Removed session 9. Sep 12 17:31:23.400511 kubelet[2641]: I0912 17:31:23.400220 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:31:25.160113 systemd[1]: Started sshd@9-10.0.0.114:22-10.0.0.1:38614.service - OpenSSH per-connection server daemon (10.0.0.1:38614). Sep 12 17:31:25.205015 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 38614 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:31:25.207068 sshd[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:25.213983 systemd-logind[1530]: New session 10 of user core. Sep 12 17:31:25.228221 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:31:25.462869 sshd[5549]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:25.472125 systemd[1]: Started sshd@10-10.0.0.114:22-10.0.0.1:38628.service - OpenSSH per-connection server daemon (10.0.0.1:38628). Sep 12 17:31:25.472546 systemd[1]: sshd@9-10.0.0.114:22-10.0.0.1:38614.service: Deactivated successfully. Sep 12 17:31:25.476300 systemd-logind[1530]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:31:25.477392 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:31:25.479170 systemd-logind[1530]: Removed session 10. Sep 12 17:31:25.529700 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 38628 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:31:25.531609 sshd[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:25.538645 systemd-logind[1530]: New session 11 of user core. Sep 12 17:31:25.549241 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:31:25.879606 sshd[5563]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:25.891025 systemd[1]: Started sshd@11-10.0.0.114:22-10.0.0.1:38634.service - OpenSSH per-connection server daemon (10.0.0.1:38634). Sep 12 17:31:25.891583 systemd[1]: sshd@10-10.0.0.114:22-10.0.0.1:38628.service: Deactivated successfully. Sep 12 17:31:25.897021 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:31:25.899893 systemd-logind[1530]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:31:25.904845 systemd-logind[1530]: Removed session 11. Sep 12 17:31:25.932794 sshd[5576]: Accepted publickey for core from 10.0.0.1 port 38634 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:31:25.934166 sshd[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:25.938359 systemd-logind[1530]: New session 12 of user core. 
Sep 12 17:31:25.946101 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:31:26.180886 sshd[5576]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:26.186758 systemd[1]: sshd@11-10.0.0.114:22-10.0.0.1:38634.service: Deactivated successfully. Sep 12 17:31:26.189734 systemd-logind[1530]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:31:26.190091 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:31:26.191678 systemd-logind[1530]: Removed session 12. Sep 12 17:31:28.262508 containerd[1548]: time="2025-09-12T17:31:28.262466191Z" level=info msg="StopPodSandbox for \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\"" Sep 12 17:31:28.362368 containerd[1548]: 2025-09-12 17:31:28.311 [WARNING][5651] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" WorkloadEndpoint="localhost-k8s-whisker--6777b9698d--87svx-eth0" Sep 12 17:31:28.362368 containerd[1548]: 2025-09-12 17:31:28.312 [INFO][5651] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:31:28.362368 containerd[1548]: 2025-09-12 17:31:28.312 [INFO][5651] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" iface="eth0" netns="" Sep 12 17:31:28.362368 containerd[1548]: 2025-09-12 17:31:28.312 [INFO][5651] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:31:28.362368 containerd[1548]: 2025-09-12 17:31:28.312 [INFO][5651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:31:28.362368 containerd[1548]: 2025-09-12 17:31:28.346 [INFO][5662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" HandleID="k8s-pod-network.41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Workload="localhost-k8s-whisker--6777b9698d--87svx-eth0" Sep 12 17:31:28.362368 containerd[1548]: 2025-09-12 17:31:28.346 [INFO][5662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:28.362368 containerd[1548]: 2025-09-12 17:31:28.346 [INFO][5662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:28.362368 containerd[1548]: 2025-09-12 17:31:28.356 [WARNING][5662] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" HandleID="k8s-pod-network.41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Workload="localhost-k8s-whisker--6777b9698d--87svx-eth0" Sep 12 17:31:28.362368 containerd[1548]: 2025-09-12 17:31:28.356 [INFO][5662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" HandleID="k8s-pod-network.41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Workload="localhost-k8s-whisker--6777b9698d--87svx-eth0" Sep 12 17:31:28.362368 containerd[1548]: 2025-09-12 17:31:28.358 [INFO][5662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:31:28.362368 containerd[1548]: 2025-09-12 17:31:28.360 [INFO][5651] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:31:28.362368 containerd[1548]: time="2025-09-12T17:31:28.362239744Z" level=info msg="TearDown network for sandbox \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\" successfully" Sep 12 17:31:28.362368 containerd[1548]: time="2025-09-12T17:31:28.362271824Z" level=info msg="StopPodSandbox for \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\" returns successfully" Sep 12 17:31:28.363015 containerd[1548]: time="2025-09-12T17:31:28.362789298Z" level=info msg="RemovePodSandbox for \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\"" Sep 12 17:31:28.371121 containerd[1548]: time="2025-09-12T17:31:28.371067684Z" level=info msg="Forcibly stopping sandbox \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\"" Sep 12 17:31:28.436510 containerd[1548]: 2025-09-12 17:31:28.404 [WARNING][5680] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" WorkloadEndpoint="localhost-k8s-whisker--6777b9698d--87svx-eth0" Sep 12 17:31:28.436510 containerd[1548]: 2025-09-12 17:31:28.404 [INFO][5680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:31:28.436510 containerd[1548]: 2025-09-12 17:31:28.404 [INFO][5680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" iface="eth0" netns="" Sep 12 17:31:28.436510 containerd[1548]: 2025-09-12 17:31:28.404 [INFO][5680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:31:28.436510 containerd[1548]: 2025-09-12 17:31:28.404 [INFO][5680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:31:28.436510 containerd[1548]: 2025-09-12 17:31:28.422 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" HandleID="k8s-pod-network.41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Workload="localhost-k8s-whisker--6777b9698d--87svx-eth0" Sep 12 17:31:28.436510 containerd[1548]: 2025-09-12 17:31:28.422 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:28.436510 containerd[1548]: 2025-09-12 17:31:28.422 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:28.436510 containerd[1548]: 2025-09-12 17:31:28.431 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" HandleID="k8s-pod-network.41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Workload="localhost-k8s-whisker--6777b9698d--87svx-eth0" Sep 12 17:31:28.436510 containerd[1548]: 2025-09-12 17:31:28.431 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" HandleID="k8s-pod-network.41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Workload="localhost-k8s-whisker--6777b9698d--87svx-eth0" Sep 12 17:31:28.436510 containerd[1548]: 2025-09-12 17:31:28.433 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:28.436510 containerd[1548]: 2025-09-12 17:31:28.434 [INFO][5680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f" Sep 12 17:31:28.437834 containerd[1548]: time="2025-09-12T17:31:28.436939460Z" level=info msg="TearDown network for sandbox \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\" successfully" Sep 12 17:31:28.444527 containerd[1548]: time="2025-09-12T17:31:28.444472615Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:31:28.444672 containerd[1548]: time="2025-09-12T17:31:28.444582694Z" level=info msg="RemovePodSandbox \"41550c9505dbeec0db664cf36236749f9e236912260714de9226ea3aefa36d5f\" returns successfully" Sep 12 17:31:28.445177 containerd[1548]: time="2025-09-12T17:31:28.445152248Z" level=info msg="StopPodSandbox for \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\"" Sep 12 17:31:28.519135 containerd[1548]: 2025-09-12 17:31:28.481 [WARNING][5707] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c7e50a73-6884-4846-84af-b99c62b21ac0", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459", Pod:"coredns-7c65d6cfc9-tmx7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie3fc8ae14d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:28.519135 containerd[1548]: 2025-09-12 17:31:28.481 [INFO][5707] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:31:28.519135 containerd[1548]: 2025-09-12 17:31:28.481 [INFO][5707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" iface="eth0" netns="" Sep 12 17:31:28.519135 containerd[1548]: 2025-09-12 17:31:28.481 [INFO][5707] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:31:28.519135 containerd[1548]: 2025-09-12 17:31:28.481 [INFO][5707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:31:28.519135 containerd[1548]: 2025-09-12 17:31:28.505 [INFO][5716] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" HandleID="k8s-pod-network.579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Workload="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:28.519135 containerd[1548]: 2025-09-12 17:31:28.505 [INFO][5716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:28.519135 containerd[1548]: 2025-09-12 17:31:28.505 [INFO][5716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:31:28.519135 containerd[1548]: 2025-09-12 17:31:28.513 [WARNING][5716] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" HandleID="k8s-pod-network.579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Workload="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:28.519135 containerd[1548]: 2025-09-12 17:31:28.514 [INFO][5716] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" HandleID="k8s-pod-network.579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Workload="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:28.519135 containerd[1548]: 2025-09-12 17:31:28.515 [INFO][5716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:28.519135 containerd[1548]: 2025-09-12 17:31:28.517 [INFO][5707] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:31:28.522378 containerd[1548]: time="2025-09-12T17:31:28.522313256Z" level=info msg="TearDown network for sandbox \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\" successfully" Sep 12 17:31:28.522378 containerd[1548]: time="2025-09-12T17:31:28.522353656Z" level=info msg="StopPodSandbox for \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\" returns successfully" Sep 12 17:31:28.522989 containerd[1548]: time="2025-09-12T17:31:28.522958369Z" level=info msg="RemovePodSandbox for \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\"" Sep 12 17:31:28.523047 containerd[1548]: time="2025-09-12T17:31:28.522997409Z" level=info msg="Forcibly stopping sandbox \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\"" Sep 12 17:31:28.605242 containerd[1548]: 2025-09-12 17:31:28.566 [WARNING][5735] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c7e50a73-6884-4846-84af-b99c62b21ac0", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0581f40e61f66f6ad45d75eda0c65668a590192cc95f7c84275da7137115459", Pod:"coredns-7c65d6cfc9-tmx7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie3fc8ae14d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:28.605242 containerd[1548]: 2025-09-12 17:31:28.566 [INFO][5735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:31:28.605242 containerd[1548]: 2025-09-12 17:31:28.566 [INFO][5735] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" iface="eth0" netns="" Sep 12 17:31:28.605242 containerd[1548]: 2025-09-12 17:31:28.566 [INFO][5735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:31:28.605242 containerd[1548]: 2025-09-12 17:31:28.566 [INFO][5735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:31:28.605242 containerd[1548]: 2025-09-12 17:31:28.590 [INFO][5744] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" HandleID="k8s-pod-network.579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Workload="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:28.605242 containerd[1548]: 2025-09-12 17:31:28.590 [INFO][5744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:28.605242 containerd[1548]: 2025-09-12 17:31:28.590 [INFO][5744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:31:28.605242 containerd[1548]: 2025-09-12 17:31:28.599 [WARNING][5744] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" HandleID="k8s-pod-network.579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Workload="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:28.605242 containerd[1548]: 2025-09-12 17:31:28.599 [INFO][5744] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" HandleID="k8s-pod-network.579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Workload="localhost-k8s-coredns--7c65d6cfc9--tmx7c-eth0" Sep 12 17:31:28.605242 containerd[1548]: 2025-09-12 17:31:28.601 [INFO][5744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:28.605242 containerd[1548]: 2025-09-12 17:31:28.603 [INFO][5735] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c" Sep 12 17:31:28.605675 containerd[1548]: time="2025-09-12T17:31:28.605288679Z" level=info msg="TearDown network for sandbox \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\" successfully" Sep 12 17:31:28.683554 containerd[1548]: time="2025-09-12T17:31:28.683489516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:31:28.683743 containerd[1548]: time="2025-09-12T17:31:28.683572515Z" level=info msg="RemovePodSandbox \"579cf2d2daf90a819e021e34ad404c44d05577a2ca90559cfa76fc44676cb69c\" returns successfully" Sep 12 17:31:28.684165 containerd[1548]: time="2025-09-12T17:31:28.684140869Z" level=info msg="StopPodSandbox for \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\"" Sep 12 17:31:28.759212 containerd[1548]: 2025-09-12 17:31:28.722 [WARNING][5762] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0", GenerateName:"calico-apiserver-75f496c6fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9d7de93-a50e-470c-992a-6e1a6cde9578", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f496c6fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0", Pod:"calico-apiserver-75f496c6fb-6462h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87e24853e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:28.759212 containerd[1548]: 2025-09-12 17:31:28.722 [INFO][5762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:31:28.759212 containerd[1548]: 2025-09-12 17:31:28.722 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" iface="eth0" netns="" Sep 12 17:31:28.759212 containerd[1548]: 2025-09-12 17:31:28.722 [INFO][5762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:31:28.759212 containerd[1548]: 2025-09-12 17:31:28.722 [INFO][5762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:31:28.759212 containerd[1548]: 2025-09-12 17:31:28.743 [INFO][5770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" HandleID="k8s-pod-network.c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Workload="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:28.759212 containerd[1548]: 2025-09-12 17:31:28.743 [INFO][5770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:28.759212 containerd[1548]: 2025-09-12 17:31:28.743 [INFO][5770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:28.759212 containerd[1548]: 2025-09-12 17:31:28.753 [WARNING][5770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" HandleID="k8s-pod-network.c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Workload="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:28.759212 containerd[1548]: 2025-09-12 17:31:28.754 [INFO][5770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" HandleID="k8s-pod-network.c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Workload="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:28.759212 containerd[1548]: 2025-09-12 17:31:28.755 [INFO][5770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:28.759212 containerd[1548]: 2025-09-12 17:31:28.757 [INFO][5762] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:31:28.759964 containerd[1548]: time="2025-09-12T17:31:28.759265341Z" level=info msg="TearDown network for sandbox \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\" successfully" Sep 12 17:31:28.759964 containerd[1548]: time="2025-09-12T17:31:28.759294180Z" level=info msg="StopPodSandbox for \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\" returns successfully" Sep 12 17:31:28.759964 containerd[1548]: time="2025-09-12T17:31:28.759841774Z" level=info msg="RemovePodSandbox for \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\"" Sep 12 17:31:28.759964 containerd[1548]: time="2025-09-12T17:31:28.759871014Z" level=info msg="Forcibly stopping sandbox \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\"" Sep 12 17:31:28.854190 containerd[1548]: 2025-09-12 17:31:28.803 [WARNING][5788] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0", GenerateName:"calico-apiserver-75f496c6fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9d7de93-a50e-470c-992a-6e1a6cde9578", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f496c6fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6848a16324fa486b78509c847e5e08ec40537f0dc931e437357f28401d217ed0", Pod:"calico-apiserver-75f496c6fb-6462h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87e24853e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:28.854190 containerd[1548]: 2025-09-12 17:31:28.804 [INFO][5788] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:31:28.854190 containerd[1548]: 2025-09-12 17:31:28.804 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" iface="eth0" netns="" Sep 12 17:31:28.854190 containerd[1548]: 2025-09-12 17:31:28.804 [INFO][5788] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:31:28.854190 containerd[1548]: 2025-09-12 17:31:28.804 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:31:28.854190 containerd[1548]: 2025-09-12 17:31:28.838 [INFO][5799] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" HandleID="k8s-pod-network.c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Workload="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:28.854190 containerd[1548]: 2025-09-12 17:31:28.838 [INFO][5799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:28.854190 containerd[1548]: 2025-09-12 17:31:28.838 [INFO][5799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:28.854190 containerd[1548]: 2025-09-12 17:31:28.847 [WARNING][5799] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" HandleID="k8s-pod-network.c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Workload="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:28.854190 containerd[1548]: 2025-09-12 17:31:28.847 [INFO][5799] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" HandleID="k8s-pod-network.c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Workload="localhost-k8s-calico--apiserver--75f496c6fb--6462h-eth0" Sep 12 17:31:28.854190 containerd[1548]: 2025-09-12 17:31:28.850 [INFO][5799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:28.854190 containerd[1548]: 2025-09-12 17:31:28.852 [INFO][5788] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52" Sep 12 17:31:28.856160 containerd[1548]: time="2025-09-12T17:31:28.854636304Z" level=info msg="TearDown network for sandbox \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\" successfully" Sep 12 17:31:28.886069 containerd[1548]: time="2025-09-12T17:31:28.886015389Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:31:28.886381 containerd[1548]: time="2025-09-12T17:31:28.886347025Z" level=info msg="RemovePodSandbox \"c7e07416fc26ab599fb7130a81e54202e511815cafce1262f0e3c53053a71d52\" returns successfully" Sep 12 17:31:28.887068 containerd[1548]: time="2025-09-12T17:31:28.887014818Z" level=info msg="StopPodSandbox for \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\"" Sep 12 17:31:28.965837 containerd[1548]: 2025-09-12 17:31:28.923 [WARNING][5817] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t2lbd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09e6a9f7-4303-4b5f-ad99-a3e9b65f6620", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c", Pod:"csi-node-driver-t2lbd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92a09bf834b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:28.965837 containerd[1548]: 2025-09-12 17:31:28.924 [INFO][5817] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:31:28.965837 containerd[1548]: 2025-09-12 17:31:28.924 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" iface="eth0" netns="" Sep 12 17:31:28.965837 containerd[1548]: 2025-09-12 17:31:28.924 [INFO][5817] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:31:28.965837 containerd[1548]: 2025-09-12 17:31:28.924 [INFO][5817] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:31:28.965837 containerd[1548]: 2025-09-12 17:31:28.945 [INFO][5825] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" HandleID="k8s-pod-network.faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Workload="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:28.965837 containerd[1548]: 2025-09-12 17:31:28.946 [INFO][5825] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:28.965837 containerd[1548]: 2025-09-12 17:31:28.946 [INFO][5825] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:28.965837 containerd[1548]: 2025-09-12 17:31:28.956 [WARNING][5825] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" HandleID="k8s-pod-network.faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Workload="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:28.965837 containerd[1548]: 2025-09-12 17:31:28.956 [INFO][5825] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" HandleID="k8s-pod-network.faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Workload="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:28.965837 containerd[1548]: 2025-09-12 17:31:28.959 [INFO][5825] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:28.965837 containerd[1548]: 2025-09-12 17:31:28.961 [INFO][5817] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:31:28.966241 containerd[1548]: time="2025-09-12T17:31:28.965871047Z" level=info msg="TearDown network for sandbox \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\" successfully" Sep 12 17:31:28.966241 containerd[1548]: time="2025-09-12T17:31:28.965900207Z" level=info msg="StopPodSandbox for \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\" returns successfully" Sep 12 17:31:28.966696 containerd[1548]: time="2025-09-12T17:31:28.966648999Z" level=info msg="RemovePodSandbox for \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\"" Sep 12 17:31:28.966696 containerd[1548]: time="2025-09-12T17:31:28.966685158Z" level=info msg="Forcibly stopping sandbox \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\"" Sep 12 17:31:29.047469 containerd[1548]: 2025-09-12 17:31:29.007 [WARNING][5840] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t2lbd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09e6a9f7-4303-4b5f-ad99-a3e9b65f6620", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfc39a10916c0a00b728d7e261a4bda14749c34e385ce478ae198723a69fa63c", Pod:"csi-node-driver-t2lbd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92a09bf834b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:29.047469 containerd[1548]: 2025-09-12 17:31:29.008 [INFO][5840] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:31:29.047469 containerd[1548]: 2025-09-12 17:31:29.008 [INFO][5840] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" iface="eth0" netns="" Sep 12 17:31:29.047469 containerd[1548]: 2025-09-12 17:31:29.008 [INFO][5840] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:31:29.047469 containerd[1548]: 2025-09-12 17:31:29.008 [INFO][5840] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:31:29.047469 containerd[1548]: 2025-09-12 17:31:29.028 [INFO][5849] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" HandleID="k8s-pod-network.faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Workload="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:29.047469 containerd[1548]: 2025-09-12 17:31:29.028 [INFO][5849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:29.047469 containerd[1548]: 2025-09-12 17:31:29.028 [INFO][5849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:29.047469 containerd[1548]: 2025-09-12 17:31:29.040 [WARNING][5849] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" HandleID="k8s-pod-network.faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Workload="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:29.047469 containerd[1548]: 2025-09-12 17:31:29.040 [INFO][5849] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" HandleID="k8s-pod-network.faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Workload="localhost-k8s-csi--node--driver--t2lbd-eth0" Sep 12 17:31:29.047469 containerd[1548]: 2025-09-12 17:31:29.043 [INFO][5849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:29.047469 containerd[1548]: 2025-09-12 17:31:29.046 [INFO][5840] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515" Sep 12 17:31:29.047975 containerd[1548]: time="2025-09-12T17:31:29.047513451Z" level=info msg="TearDown network for sandbox \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\" successfully" Sep 12 17:31:29.050507 containerd[1548]: time="2025-09-12T17:31:29.050455698Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:31:29.050660 containerd[1548]: time="2025-09-12T17:31:29.050531497Z" level=info msg="RemovePodSandbox \"faef3df3051654fa51166d41ab55d18090a4472bc7b73ddbc774fdd48fc12515\" returns successfully" Sep 12 17:31:29.051272 containerd[1548]: time="2025-09-12T17:31:29.050999812Z" level=info msg="StopPodSandbox for \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\"" Sep 12 17:31:29.125845 containerd[1548]: 2025-09-12 17:31:29.089 [WARNING][5866] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--gz6qp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d33a2b08-c505-4cce-a314-e4b791e0c009", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2", Pod:"goldmane-7988f88666-gz6qp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib45b26671a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:29.125845 containerd[1548]: 2025-09-12 17:31:29.090 [INFO][5866] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:31:29.125845 containerd[1548]: 2025-09-12 17:31:29.090 [INFO][5866] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" iface="eth0" netns="" Sep 12 17:31:29.125845 containerd[1548]: 2025-09-12 17:31:29.090 [INFO][5866] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:31:29.125845 containerd[1548]: 2025-09-12 17:31:29.090 [INFO][5866] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:31:29.125845 containerd[1548]: 2025-09-12 17:31:29.109 [INFO][5874] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" HandleID="k8s-pod-network.a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Workload="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:29.125845 containerd[1548]: 2025-09-12 17:31:29.109 [INFO][5874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:29.125845 containerd[1548]: 2025-09-12 17:31:29.109 [INFO][5874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:29.125845 containerd[1548]: 2025-09-12 17:31:29.119 [WARNING][5874] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" HandleID="k8s-pod-network.a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Workload="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:29.125845 containerd[1548]: 2025-09-12 17:31:29.119 [INFO][5874] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" HandleID="k8s-pod-network.a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Workload="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:29.125845 containerd[1548]: 2025-09-12 17:31:29.122 [INFO][5874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:29.125845 containerd[1548]: 2025-09-12 17:31:29.124 [INFO][5866] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:31:29.127640 containerd[1548]: time="2025-09-12T17:31:29.127607035Z" level=info msg="TearDown network for sandbox \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\" successfully" Sep 12 17:31:29.127827 containerd[1548]: time="2025-09-12T17:31:29.127697394Z" level=info msg="StopPodSandbox for \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\" returns successfully" Sep 12 17:31:29.128208 containerd[1548]: time="2025-09-12T17:31:29.128186869Z" level=info msg="RemovePodSandbox for \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\"" Sep 12 17:31:29.128555 containerd[1548]: time="2025-09-12T17:31:29.128305668Z" level=info msg="Forcibly stopping sandbox \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\"" Sep 12 17:31:29.223473 containerd[1548]: 2025-09-12 17:31:29.166 [WARNING][5892] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--gz6qp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d33a2b08-c505-4cce-a314-e4b791e0c009", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65f6f4cf9bb4ae0107bb1a278243ba78e0d1c0744680e099847e99008bffadc2", Pod:"goldmane-7988f88666-gz6qp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib45b26671a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:29.223473 containerd[1548]: 2025-09-12 17:31:29.166 [INFO][5892] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:31:29.223473 containerd[1548]: 2025-09-12 17:31:29.166 [INFO][5892] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" iface="eth0" netns="" Sep 12 17:31:29.223473 containerd[1548]: 2025-09-12 17:31:29.166 [INFO][5892] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:31:29.223473 containerd[1548]: 2025-09-12 17:31:29.166 [INFO][5892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:31:29.223473 containerd[1548]: 2025-09-12 17:31:29.203 [INFO][5901] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" HandleID="k8s-pod-network.a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Workload="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:29.223473 containerd[1548]: 2025-09-12 17:31:29.203 [INFO][5901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:29.223473 containerd[1548]: 2025-09-12 17:31:29.203 [INFO][5901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:29.223473 containerd[1548]: 2025-09-12 17:31:29.217 [WARNING][5901] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" HandleID="k8s-pod-network.a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Workload="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:29.223473 containerd[1548]: 2025-09-12 17:31:29.217 [INFO][5901] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" HandleID="k8s-pod-network.a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Workload="localhost-k8s-goldmane--7988f88666--gz6qp-eth0" Sep 12 17:31:29.223473 containerd[1548]: 2025-09-12 17:31:29.219 [INFO][5901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:29.223473 containerd[1548]: 2025-09-12 17:31:29.221 [INFO][5892] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d" Sep 12 17:31:29.224234 containerd[1548]: time="2025-09-12T17:31:29.223960798Z" level=info msg="TearDown network for sandbox \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\" successfully" Sep 12 17:31:29.229081 containerd[1548]: time="2025-09-12T17:31:29.229036421Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:31:29.229457 containerd[1548]: time="2025-09-12T17:31:29.229312618Z" level=info msg="RemovePodSandbox \"a19066ca4631c6d9583fe630f9b21ba9ae3c61e63a1d69319007c3fc77b3158d\" returns successfully" Sep 12 17:31:29.230631 containerd[1548]: time="2025-09-12T17:31:29.229951251Z" level=info msg="StopPodSandbox for \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\"" Sep 12 17:31:29.307498 containerd[1548]: 2025-09-12 17:31:29.268 [WARNING][5917] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0", GenerateName:"calico-kube-controllers-64d9df5885-", Namespace:"calico-system", SelfLink:"", UID:"89f3f547-52ae-4646-86ac-31102c426a8a", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d9df5885", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590", Pod:"calico-kube-controllers-64d9df5885-5z5xb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie404673a48d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:29.307498 containerd[1548]: 2025-09-12 17:31:29.268 [INFO][5917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:31:29.307498 containerd[1548]: 2025-09-12 17:31:29.268 [INFO][5917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" iface="eth0" netns="" Sep 12 17:31:29.307498 containerd[1548]: 2025-09-12 17:31:29.268 [INFO][5917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:31:29.307498 containerd[1548]: 2025-09-12 17:31:29.268 [INFO][5917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:31:29.307498 containerd[1548]: 2025-09-12 17:31:29.288 [INFO][5926] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" HandleID="k8s-pod-network.230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Workload="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:29.307498 containerd[1548]: 2025-09-12 17:31:29.288 [INFO][5926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:29.307498 containerd[1548]: 2025-09-12 17:31:29.288 [INFO][5926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:29.307498 containerd[1548]: 2025-09-12 17:31:29.298 [WARNING][5926] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" HandleID="k8s-pod-network.230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Workload="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:29.307498 containerd[1548]: 2025-09-12 17:31:29.298 [INFO][5926] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" HandleID="k8s-pod-network.230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Workload="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:29.307498 containerd[1548]: 2025-09-12 17:31:29.303 [INFO][5926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:29.307498 containerd[1548]: 2025-09-12 17:31:29.305 [INFO][5917] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:31:29.307498 containerd[1548]: time="2025-09-12T17:31:29.307430865Z" level=info msg="TearDown network for sandbox \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\" successfully" Sep 12 17:31:29.307498 containerd[1548]: time="2025-09-12T17:31:29.307459665Z" level=info msg="StopPodSandbox for \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\" returns successfully" Sep 12 17:31:29.308258 containerd[1548]: time="2025-09-12T17:31:29.307895460Z" level=info msg="RemovePodSandbox for \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\"" Sep 12 17:31:29.308258 containerd[1548]: time="2025-09-12T17:31:29.307925899Z" level=info msg="Forcibly stopping sandbox \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\"" Sep 12 17:31:29.384953 containerd[1548]: 2025-09-12 17:31:29.346 [WARNING][5944] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0", GenerateName:"calico-kube-controllers-64d9df5885-", Namespace:"calico-system", SelfLink:"", UID:"89f3f547-52ae-4646-86ac-31102c426a8a", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d9df5885", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33d1daf552e5e64134b4c8deb7721f5a70e181ac714a54b2ac4e1013f4843590", Pod:"calico-kube-controllers-64d9df5885-5z5xb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie404673a48d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:29.384953 containerd[1548]: 2025-09-12 17:31:29.346 [INFO][5944] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:31:29.384953 containerd[1548]: 2025-09-12 17:31:29.346 [INFO][5944] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" iface="eth0" netns="" Sep 12 17:31:29.384953 containerd[1548]: 2025-09-12 17:31:29.346 [INFO][5944] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:31:29.384953 containerd[1548]: 2025-09-12 17:31:29.346 [INFO][5944] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:31:29.384953 containerd[1548]: 2025-09-12 17:31:29.368 [INFO][5969] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" HandleID="k8s-pod-network.230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Workload="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:29.384953 containerd[1548]: 2025-09-12 17:31:29.368 [INFO][5969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:29.384953 containerd[1548]: 2025-09-12 17:31:29.368 [INFO][5969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:29.384953 containerd[1548]: 2025-09-12 17:31:29.378 [WARNING][5969] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" HandleID="k8s-pod-network.230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Workload="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:29.384953 containerd[1548]: 2025-09-12 17:31:29.378 [INFO][5969] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" HandleID="k8s-pod-network.230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Workload="localhost-k8s-calico--kube--controllers--64d9df5885--5z5xb-eth0" Sep 12 17:31:29.384953 containerd[1548]: 2025-09-12 17:31:29.380 [INFO][5969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:29.384953 containerd[1548]: 2025-09-12 17:31:29.382 [INFO][5944] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891" Sep 12 17:31:29.384953 containerd[1548]: time="2025-09-12T17:31:29.384909719Z" level=info msg="TearDown network for sandbox \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\" successfully" Sep 12 17:31:29.390602 containerd[1548]: time="2025-09-12T17:31:29.390523456Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:31:29.390706 containerd[1548]: time="2025-09-12T17:31:29.390628495Z" level=info msg="RemovePodSandbox \"230f8a3e0e110c7b6f9808798190aca1f058241d415a66417eebde760b23f891\" returns successfully" Sep 12 17:31:29.391184 containerd[1548]: time="2025-09-12T17:31:29.391160649Z" level=info msg="StopPodSandbox for \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\"" Sep 12 17:31:29.460038 containerd[1548]: 2025-09-12 17:31:29.424 [WARNING][5991] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0", GenerateName:"calico-apiserver-75f496c6fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"824c5a12-bcb9-44ed-a3d8-24c299fba85d", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f496c6fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2", Pod:"calico-apiserver-75f496c6fb-7szgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid06122c5ae6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:29.460038 containerd[1548]: 2025-09-12 17:31:29.425 [INFO][5991] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:31:29.460038 containerd[1548]: 2025-09-12 17:31:29.425 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" iface="eth0" netns="" Sep 12 17:31:29.460038 containerd[1548]: 2025-09-12 17:31:29.425 [INFO][5991] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:31:29.460038 containerd[1548]: 2025-09-12 17:31:29.425 [INFO][5991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:31:29.460038 containerd[1548]: 2025-09-12 17:31:29.445 [INFO][5999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" HandleID="k8s-pod-network.05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Workload="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:29.460038 containerd[1548]: 2025-09-12 17:31:29.445 [INFO][5999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:29.460038 containerd[1548]: 2025-09-12 17:31:29.445 [INFO][5999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:29.460038 containerd[1548]: 2025-09-12 17:31:29.454 [WARNING][5999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" HandleID="k8s-pod-network.05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Workload="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:29.460038 containerd[1548]: 2025-09-12 17:31:29.454 [INFO][5999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" HandleID="k8s-pod-network.05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Workload="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:29.460038 containerd[1548]: 2025-09-12 17:31:29.456 [INFO][5999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:29.460038 containerd[1548]: 2025-09-12 17:31:29.458 [INFO][5991] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:31:29.460495 containerd[1548]: time="2025-09-12T17:31:29.460081158Z" level=info msg="TearDown network for sandbox \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\" successfully" Sep 12 17:31:29.460495 containerd[1548]: time="2025-09-12T17:31:29.460106158Z" level=info msg="StopPodSandbox for \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\" returns successfully" Sep 12 17:31:29.460565 containerd[1548]: time="2025-09-12T17:31:29.460543793Z" level=info msg="RemovePodSandbox for \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\"" Sep 12 17:31:29.460613 containerd[1548]: time="2025-09-12T17:31:29.460574513Z" level=info msg="Forcibly stopping sandbox \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\"" Sep 12 17:31:29.534871 containerd[1548]: 2025-09-12 17:31:29.496 [WARNING][6017] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0", GenerateName:"calico-apiserver-75f496c6fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"824c5a12-bcb9-44ed-a3d8-24c299fba85d", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f496c6fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a8456a14e39a1b175a725564610f56d4af806182550c2d3aec40db9c10892f2", Pod:"calico-apiserver-75f496c6fb-7szgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid06122c5ae6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:29.534871 containerd[1548]: 2025-09-12 17:31:29.496 [INFO][6017] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:31:29.534871 containerd[1548]: 2025-09-12 17:31:29.496 [INFO][6017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" iface="eth0" netns="" Sep 12 17:31:29.534871 containerd[1548]: 2025-09-12 17:31:29.496 [INFO][6017] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:31:29.534871 containerd[1548]: 2025-09-12 17:31:29.496 [INFO][6017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:31:29.534871 containerd[1548]: 2025-09-12 17:31:29.514 [INFO][6025] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" HandleID="k8s-pod-network.05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Workload="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:29.534871 containerd[1548]: 2025-09-12 17:31:29.514 [INFO][6025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:29.534871 containerd[1548]: 2025-09-12 17:31:29.514 [INFO][6025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:29.534871 containerd[1548]: 2025-09-12 17:31:29.526 [WARNING][6025] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" HandleID="k8s-pod-network.05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Workload="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:29.534871 containerd[1548]: 2025-09-12 17:31:29.526 [INFO][6025] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" HandleID="k8s-pod-network.05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Workload="localhost-k8s-calico--apiserver--75f496c6fb--7szgf-eth0" Sep 12 17:31:29.534871 containerd[1548]: 2025-09-12 17:31:29.528 [INFO][6025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:29.534871 containerd[1548]: 2025-09-12 17:31:29.532 [INFO][6017] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26" Sep 12 17:31:29.535315 containerd[1548]: time="2025-09-12T17:31:29.534900642Z" level=info msg="TearDown network for sandbox \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\" successfully" Sep 12 17:31:29.542454 containerd[1548]: time="2025-09-12T17:31:29.542395758Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:31:29.542570 containerd[1548]: time="2025-09-12T17:31:29.542472317Z" level=info msg="RemovePodSandbox \"05a8a83e0a4fccb26b4b7f209abc94a5c022c706f207230d3c6b9b3d79b6fd26\" returns successfully" Sep 12 17:31:29.543072 containerd[1548]: time="2025-09-12T17:31:29.543029191Z" level=info msg="StopPodSandbox for \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\"" Sep 12 17:31:29.624757 containerd[1548]: 2025-09-12 17:31:29.585 [WARNING][6042] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1786da27-c74b-428e-9360-4f44ff994f41", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e", Pod:"coredns-7c65d6cfc9-58qjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48e1ad897c0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:29.624757 containerd[1548]: 2025-09-12 17:31:29.585 [INFO][6042] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:31:29.624757 containerd[1548]: 2025-09-12 17:31:29.585 [INFO][6042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" iface="eth0" netns="" Sep 12 17:31:29.624757 containerd[1548]: 2025-09-12 17:31:29.585 [INFO][6042] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:31:29.624757 containerd[1548]: 2025-09-12 17:31:29.585 [INFO][6042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:31:29.624757 containerd[1548]: 2025-09-12 17:31:29.606 [INFO][6051] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" HandleID="k8s-pod-network.15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Workload="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:29.624757 containerd[1548]: 2025-09-12 17:31:29.606 [INFO][6051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:29.624757 containerd[1548]: 2025-09-12 17:31:29.606 [INFO][6051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:31:29.624757 containerd[1548]: 2025-09-12 17:31:29.617 [WARNING][6051] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" HandleID="k8s-pod-network.15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Workload="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:29.624757 containerd[1548]: 2025-09-12 17:31:29.617 [INFO][6051] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" HandleID="k8s-pod-network.15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Workload="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:29.624757 containerd[1548]: 2025-09-12 17:31:29.619 [INFO][6051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:29.624757 containerd[1548]: 2025-09-12 17:31:29.622 [INFO][6042] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:31:29.625172 containerd[1548]: time="2025-09-12T17:31:29.624810917Z" level=info msg="TearDown network for sandbox \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\" successfully" Sep 12 17:31:29.625172 containerd[1548]: time="2025-09-12T17:31:29.624836636Z" level=info msg="StopPodSandbox for \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\" returns successfully" Sep 12 17:31:29.625348 containerd[1548]: time="2025-09-12T17:31:29.625321791Z" level=info msg="RemovePodSandbox for \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\"" Sep 12 17:31:29.625407 containerd[1548]: time="2025-09-12T17:31:29.625357831Z" level=info msg="Forcibly stopping sandbox \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\"" Sep 12 17:31:29.712976 containerd[1548]: 2025-09-12 17:31:29.659 [WARNING][6069] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1786da27-c74b-428e-9360-4f44ff994f41", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6893d99930d92509ff46aaf4c1f5352a3183b6e3cbf1ed9cd2587deccfc8075e", Pod:"coredns-7c65d6cfc9-58qjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48e1ad897c0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:29.712976 containerd[1548]: 2025-09-12 17:31:29.660 [INFO][6069] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:31:29.712976 containerd[1548]: 2025-09-12 17:31:29.660 [INFO][6069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" iface="eth0" netns="" Sep 12 17:31:29.712976 containerd[1548]: 2025-09-12 17:31:29.660 [INFO][6069] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:31:29.712976 containerd[1548]: 2025-09-12 17:31:29.660 [INFO][6069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:31:29.712976 containerd[1548]: 2025-09-12 17:31:29.682 [INFO][6078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" HandleID="k8s-pod-network.15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Workload="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:29.712976 containerd[1548]: 2025-09-12 17:31:29.682 [INFO][6078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:29.712976 containerd[1548]: 2025-09-12 17:31:29.682 [INFO][6078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:31:29.712976 containerd[1548]: 2025-09-12 17:31:29.705 [WARNING][6078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" HandleID="k8s-pod-network.15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Workload="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:29.712976 containerd[1548]: 2025-09-12 17:31:29.705 [INFO][6078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" HandleID="k8s-pod-network.15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Workload="localhost-k8s-coredns--7c65d6cfc9--58qjr-eth0" Sep 12 17:31:29.712976 containerd[1548]: 2025-09-12 17:31:29.709 [INFO][6078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:29.712976 containerd[1548]: 2025-09-12 17:31:29.711 [INFO][6069] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa" Sep 12 17:31:29.713437 containerd[1548]: time="2025-09-12T17:31:29.713027530Z" level=info msg="TearDown network for sandbox \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\" successfully" Sep 12 17:31:29.745540 containerd[1548]: time="2025-09-12T17:31:29.745487887Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:31:29.745650 containerd[1548]: time="2025-09-12T17:31:29.745576566Z" level=info msg="RemovePodSandbox \"15dcfded72b116f2d5e825861033b4a4f63a5d20f27bd1ea66772a2c1763c9fa\" returns successfully" Sep 12 17:31:31.194127 systemd[1]: Started sshd@12-10.0.0.114:22-10.0.0.1:59296.service - OpenSSH per-connection server daemon (10.0.0.1:59296). Sep 12 17:31:31.240011 sshd[6086]: Accepted publickey for core from 10.0.0.1 port 59296 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:31:31.241762 sshd[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:31.245848 systemd-logind[1530]: New session 13 of user core. Sep 12 17:31:31.253100 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:31:31.528019 sshd[6086]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:31.538087 systemd[1]: Started sshd@13-10.0.0.114:22-10.0.0.1:59312.service - OpenSSH per-connection server daemon (10.0.0.1:59312). Sep 12 17:31:31.541449 systemd[1]: sshd@12-10.0.0.114:22-10.0.0.1:59296.service: Deactivated successfully. Sep 12 17:31:31.543545 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:31:31.546870 systemd-logind[1530]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:31:31.549461 systemd-logind[1530]: Removed session 13. Sep 12 17:31:31.575694 sshd[6099]: Accepted publickey for core from 10.0.0.1 port 59312 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:31:31.577170 sshd[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:31.581356 systemd-logind[1530]: New session 14 of user core. Sep 12 17:31:31.586053 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 12 17:31:31.805028 sshd[6099]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:31.817064 systemd[1]: Started sshd@14-10.0.0.114:22-10.0.0.1:59322.service - OpenSSH per-connection server daemon (10.0.0.1:59322). Sep 12 17:31:31.817466 systemd[1]: sshd@13-10.0.0.114:22-10.0.0.1:59312.service: Deactivated successfully. Sep 12 17:31:31.821847 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:31:31.822433 systemd-logind[1530]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:31:31.824703 systemd-logind[1530]: Removed session 14. Sep 12 17:31:31.860299 sshd[6112]: Accepted publickey for core from 10.0.0.1 port 59322 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:31:31.861080 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:31.865307 systemd-logind[1530]: New session 15 of user core. Sep 12 17:31:31.878105 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:31:33.512860 sshd[6112]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:33.527479 systemd[1]: Started sshd@15-10.0.0.114:22-10.0.0.1:59326.service - OpenSSH per-connection server daemon (10.0.0.1:59326). Sep 12 17:31:33.528058 systemd[1]: sshd@14-10.0.0.114:22-10.0.0.1:59322.service: Deactivated successfully. Sep 12 17:31:33.536083 systemd-logind[1530]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:31:33.536368 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:31:33.542355 systemd-logind[1530]: Removed session 15. Sep 12 17:31:33.564142 sshd[6133]: Accepted publickey for core from 10.0.0.1 port 59326 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:31:33.565516 sshd[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:33.570566 systemd-logind[1530]: New session 16 of user core. Sep 12 17:31:33.580205 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:31:34.190430 sshd[6133]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:34.200852 systemd[1]: Started sshd@16-10.0.0.114:22-10.0.0.1:59330.service - OpenSSH per-connection server daemon (10.0.0.1:59330). Sep 12 17:31:34.201570 systemd[1]: sshd@15-10.0.0.114:22-10.0.0.1:59326.service: Deactivated successfully. Sep 12 17:31:34.203691 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:31:34.205370 systemd-logind[1530]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:31:34.206579 systemd-logind[1530]: Removed session 16. Sep 12 17:31:34.233193 sshd[6151]: Accepted publickey for core from 10.0.0.1 port 59330 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:31:34.234654 sshd[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:34.240012 systemd-logind[1530]: New session 17 of user core. Sep 12 17:31:34.246215 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:31:34.427648 sshd[6151]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:34.431007 systemd[1]: sshd@16-10.0.0.114:22-10.0.0.1:59330.service: Deactivated successfully. Sep 12 17:31:34.433980 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:31:34.435096 systemd-logind[1530]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:31:34.437059 systemd-logind[1530]: Removed session 17. 
Sep 12 17:31:39.439105 systemd[1]: Started sshd@17-10.0.0.114:22-10.0.0.1:59336.service - OpenSSH per-connection server daemon (10.0.0.1:59336). Sep 12 17:31:39.474027 sshd[6195]: Accepted publickey for core from 10.0.0.1 port 59336 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:31:39.475549 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:39.484286 systemd-logind[1530]: New session 18 of user core. Sep 12 17:31:39.492202 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:31:39.653497 sshd[6195]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:39.657426 systemd[1]: sshd@17-10.0.0.114:22-10.0.0.1:59336.service: Deactivated successfully. Sep 12 17:31:39.660664 systemd-logind[1530]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:31:39.661506 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:31:39.664596 systemd-logind[1530]: Removed session 18. Sep 12 17:31:44.662083 systemd[1]: Started sshd@18-10.0.0.114:22-10.0.0.1:49110.service - OpenSSH per-connection server daemon (10.0.0.1:49110). Sep 12 17:31:44.699816 sshd[6217]: Accepted publickey for core from 10.0.0.1 port 49110 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:31:44.703955 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:44.708781 systemd-logind[1530]: New session 19 of user core. Sep 12 17:31:44.715062 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:31:44.846778 sshd[6217]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:44.850967 systemd[1]: sshd@18-10.0.0.114:22-10.0.0.1:49110.service: Deactivated successfully. Sep 12 17:31:44.852868 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:31:44.853731 systemd-logind[1530]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:31:44.855117 systemd-logind[1530]: Removed session 19. Sep 12 17:31:45.285863 kubelet[2641]: E0912 17:31:45.285829 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:49.857046 systemd[1]: Started sshd@19-10.0.0.114:22-10.0.0.1:49124.service - OpenSSH per-connection server daemon (10.0.0.1:49124). Sep 12 17:31:49.891071 sshd[6233]: Accepted publickey for core from 10.0.0.1 port 49124 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:31:49.892464 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:49.896739 systemd-logind[1530]: New session 20 of user core. Sep 12 17:31:49.906080 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:31:50.102874 sshd[6233]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:50.106485 systemd[1]: sshd@19-10.0.0.114:22-10.0.0.1:49124.service: Deactivated successfully. Sep 12 17:31:50.108968 systemd-logind[1530]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:31:50.109046 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:31:50.110777 systemd-logind[1530]: Removed session 20.