Sep 13 00:04:18.857030 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 13 00:04:18.857062 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025 Sep 13 00:04:18.857085 kernel: KASLR enabled Sep 13 00:04:18.857091 kernel: efi: EFI v2.7 by EDK II Sep 13 00:04:18.857097 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Sep 13 00:04:18.857102 kernel: random: crng init done Sep 13 00:04:18.857110 kernel: ACPI: Early table checksum verification disabled Sep 13 00:04:18.857116 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Sep 13 00:04:18.857122 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 13 00:04:18.857138 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:04:18.857144 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:04:18.857150 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:04:18.857157 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:04:18.857163 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:04:18.857170 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:04:18.857178 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:04:18.857185 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:04:18.857191 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:04:18.857197 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 13 00:04:18.857204 kernel: NUMA: Failed to initialise from firmware Sep 13 00:04:18.857210 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 13 00:04:18.857216 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Sep 13 00:04:18.857222 kernel: Zone ranges: Sep 13 00:04:18.857229 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 13 00:04:18.857235 kernel: DMA32 empty Sep 13 00:04:18.857243 kernel: Normal empty Sep 13 00:04:18.857249 kernel: Movable zone start for each node Sep 13 00:04:18.857255 kernel: Early memory node ranges Sep 13 00:04:18.857262 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Sep 13 00:04:18.857268 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 13 00:04:18.857274 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 13 00:04:18.857281 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 13 00:04:18.857287 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 13 00:04:18.857293 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 13 00:04:18.857299 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 13 00:04:18.857306 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 13 00:04:18.857312 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 13 00:04:18.857320 kernel: psci: probing for conduit method from ACPI. Sep 13 00:04:18.857326 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 13 00:04:18.857332 kernel: psci: Using standard PSCI v0.2 function IDs Sep 13 00:04:18.857341 kernel: psci: Trusted OS migration not required Sep 13 00:04:18.857348 kernel: psci: SMC Calling Convention v1.1 Sep 13 00:04:18.857355 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 13 00:04:18.857363 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 13 00:04:18.857370 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 13 00:04:18.857377 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 13 00:04:18.857383 kernel: Detected PIPT I-cache on CPU0 Sep 13 00:04:18.857390 kernel: CPU features: detected: GIC system register CPU interface Sep 13 00:04:18.857397 kernel: CPU features: detected: Hardware dirty bit management Sep 13 00:04:18.857404 kernel: CPU features: detected: Spectre-v4 Sep 13 00:04:18.857410 kernel: CPU features: detected: Spectre-BHB Sep 13 00:04:18.857417 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 13 00:04:18.857424 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 13 00:04:18.857431 kernel: CPU features: detected: ARM erratum 1418040 Sep 13 00:04:18.857438 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 13 00:04:18.857445 kernel: alternatives: applying boot alternatives Sep 13 00:04:18.857453 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 13 00:04:18.857460 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:04:18.857467 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:04:18.857474 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:04:18.857480 kernel: Fallback order for Node 0: 0 Sep 13 00:04:18.857487 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 13 00:04:18.857494 kernel: Policy zone: DMA Sep 13 00:04:18.857500 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:04:18.857508 kernel: software IO TLB: area num 4. Sep 13 00:04:18.857515 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 13 00:04:18.857522 kernel: Memory: 2386336K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 185952K reserved, 0K cma-reserved) Sep 13 00:04:18.857529 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 13 00:04:18.857536 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:04:18.857543 kernel: rcu: RCU event tracing is enabled. Sep 13 00:04:18.857551 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 13 00:04:18.857557 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 00:04:18.857564 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:04:18.857571 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 13 00:04:18.857578 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 13 00:04:18.857586 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 13 00:04:18.857593 kernel: GICv3: 256 SPIs implemented Sep 13 00:04:18.857600 kernel: GICv3: 0 Extended SPIs implemented Sep 13 00:04:18.857607 kernel: Root IRQ handler: gic_handle_irq Sep 13 00:04:18.857613 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 13 00:04:18.857620 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 13 00:04:18.857627 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 13 00:04:18.857633 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Sep 13 00:04:18.857640 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Sep 13 00:04:18.857647 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 13 00:04:18.857654 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 13 00:04:18.857661 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 13 00:04:18.857669 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:04:18.857676 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 13 00:04:18.857683 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 13 00:04:18.857690 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 13 00:04:18.857696 kernel: arm-pv: using stolen time PV Sep 13 00:04:18.857703 kernel: Console: colour dummy device 80x25 Sep 13 00:04:18.857710 kernel: ACPI: Core revision 20230628 Sep 13 00:04:18.857717 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 13 00:04:18.857724 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:04:18.857731 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 13 00:04:18.857739 kernel: landlock: Up and running. Sep 13 00:04:18.857746 kernel: SELinux: Initializing. Sep 13 00:04:18.857753 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:04:18.857760 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:04:18.857767 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 13 00:04:18.857774 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 13 00:04:18.857781 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:04:18.857788 kernel: rcu: Max phase no-delay instances is 400. Sep 13 00:04:18.857795 kernel: Platform MSI: ITS@0x8080000 domain created Sep 13 00:04:18.857803 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 13 00:04:18.857810 kernel: Remapping and enabling EFI services. Sep 13 00:04:18.857817 kernel: smp: Bringing up secondary CPUs ... 
Sep 13 00:04:18.857824 kernel: Detected PIPT I-cache on CPU1 Sep 13 00:04:18.857831 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 13 00:04:18.857838 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 13 00:04:18.857845 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:04:18.857852 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 13 00:04:18.857859 kernel: Detected PIPT I-cache on CPU2 Sep 13 00:04:18.857866 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 13 00:04:18.857874 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 13 00:04:18.857881 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:04:18.857893 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 13 00:04:18.857901 kernel: Detected PIPT I-cache on CPU3 Sep 13 00:04:18.857909 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 13 00:04:18.857916 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 13 00:04:18.857923 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:04:18.857934 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 13 00:04:18.857941 kernel: smp: Brought up 1 node, 4 CPUs Sep 13 00:04:18.857950 kernel: SMP: Total of 4 processors activated. Sep 13 00:04:18.857957 kernel: CPU features: detected: 32-bit EL0 Support Sep 13 00:04:18.857965 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 13 00:04:18.857972 kernel: CPU features: detected: Common not Private translations Sep 13 00:04:18.857979 kernel: CPU features: detected: CRC32 instructions Sep 13 00:04:18.857986 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 13 00:04:18.857994 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 13 00:04:18.858001 kernel: CPU features: detected: LSE atomic instructions Sep 13 00:04:18.858010 kernel: CPU features: detected: Privileged Access Never Sep 13 00:04:18.858017 kernel: CPU features: detected: RAS Extension Support Sep 13 00:04:18.858024 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 13 00:04:18.858031 kernel: CPU: All CPU(s) started at EL1 Sep 13 00:04:18.858057 kernel: alternatives: applying system-wide alternatives Sep 13 00:04:18.858064 kernel: devtmpfs: initialized Sep 13 00:04:18.858072 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:04:18.858080 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 13 00:04:18.858087 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:04:18.858096 kernel: SMBIOS 3.0.0 present. 
Sep 13 00:04:18.858104 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Sep 13 00:04:18.858111 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:04:18.858118 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 13 00:04:18.858126 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 13 00:04:18.858138 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 13 00:04:18.858146 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:04:18.858153 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Sep 13 00:04:18.858160 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:04:18.858170 kernel: cpuidle: using governor menu Sep 13 00:04:18.858177 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 13 00:04:18.858185 kernel: ASID allocator initialised with 32768 entries Sep 13 00:04:18.858192 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:04:18.858199 kernel: Serial: AMBA PL011 UART driver Sep 13 00:04:18.858206 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 13 00:04:18.858214 kernel: Modules: 0 pages in range for non-PLT usage Sep 13 00:04:18.858221 kernel: Modules: 508992 pages in range for PLT usage Sep 13 00:04:18.858228 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:04:18.858237 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 13 00:04:18.858245 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 13 00:04:18.858252 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 13 00:04:18.858259 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:04:18.858266 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 13 00:04:18.858274 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 13 00:04:18.858281 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 13 00:04:18.858288 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:04:18.858295 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:04:18.858304 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:04:18.858311 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:04:18.858318 kernel: ACPI: Interpreter enabled Sep 13 00:04:18.858326 kernel: ACPI: Using GIC for interrupt routing Sep 13 00:04:18.858333 kernel: ACPI: MCFG table detected, 1 entries Sep 13 00:04:18.858340 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 13 00:04:18.858347 kernel: printk: console [ttyAMA0] enabled Sep 13 00:04:18.858355 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:04:18.858498 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:04:18.858573 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 13 00:04:18.858647 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 13 00:04:18.858709 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 13 00:04:18.858772 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 13 00:04:18.858782 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 13 00:04:18.858790 kernel: PCI host bridge to bus 0000:00 Sep 13 
00:04:18.858860 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 13 00:04:18.858923 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 13 00:04:18.858981 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 13 00:04:18.859048 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:04:18.859152 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 13 00:04:18.859232 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 13 00:04:18.859299 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 13 00:04:18.859369 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 13 00:04:18.859435 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 13 00:04:18.859503 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 13 00:04:18.859571 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 13 00:04:18.859637 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 13 00:04:18.859698 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 13 00:04:18.859757 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 13 00:04:18.859816 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 13 00:04:18.859826 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 13 00:04:18.859834 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 13 00:04:18.859841 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 13 00:04:18.859848 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 13 00:04:18.859855 kernel: iommu: Default domain type: Translated Sep 13 00:04:18.859863 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 13 00:04:18.859870 kernel: efivars: Registered efivars operations Sep 13 00:04:18.859877 kernel: vgaarb: loaded Sep 13 00:04:18.859886 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 13 00:04:18.859893 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:04:18.859901 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:04:18.859908 kernel: pnp: PnP ACPI init Sep 13 00:04:18.859982 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 13 00:04:18.859992 kernel: pnp: PnP ACPI: found 1 devices Sep 13 00:04:18.860000 kernel: NET: Registered PF_INET protocol family Sep 13 00:04:18.860007 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:04:18.860017 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 13 00:04:18.860024 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:04:18.860031 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:04:18.860047 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 13 00:04:18.860069 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 13 00:04:18.860077 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:04:18.860084 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:04:18.860092 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:04:18.860099 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:04:18.860108 kernel: kvm [1]: HYP mode not available Sep 13 00:04:18.860116 kernel: Initialise 
system trusted keyrings Sep 13 00:04:18.860123 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 13 00:04:18.860136 kernel: Key type asymmetric registered Sep 13 00:04:18.860144 kernel: Asymmetric key parser 'x509' registered Sep 13 00:04:18.860151 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 13 00:04:18.860158 kernel: io scheduler mq-deadline registered Sep 13 00:04:18.860166 kernel: io scheduler kyber registered Sep 13 00:04:18.860173 kernel: io scheduler bfq registered Sep 13 00:04:18.860182 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 13 00:04:18.860189 kernel: ACPI: button: Power Button [PWRB] Sep 13 00:04:18.860197 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 13 00:04:18.860276 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 13 00:04:18.860286 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:04:18.860293 kernel: thunder_xcv, ver 1.0 Sep 13 00:04:18.860301 kernel: thunder_bgx, ver 1.0 Sep 13 00:04:18.860308 kernel: nicpf, ver 1.0 Sep 13 00:04:18.860316 kernel: nicvf, ver 1.0 Sep 13 00:04:18.860394 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 13 00:04:18.860457 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:04:18 UTC (1757721858) Sep 13 00:04:18.860467 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 00:04:18.860475 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 13 00:04:18.860482 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 13 00:04:18.860490 kernel: watchdog: Hard watchdog permanently disabled Sep 13 00:04:18.860497 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:04:18.860505 kernel: Segment Routing with IPv6 Sep 13 00:04:18.860514 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:04:18.860522 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:04:18.860529 kernel: Key type dns_resolver registered Sep 13 00:04:18.860536 kernel: registered taskstats version 1 Sep 13 00:04:18.860543 kernel: Loading compiled-in X.509 certificates Sep 13 00:04:18.860551 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e' Sep 13 00:04:18.860558 kernel: Key type .fscrypt registered Sep 13 00:04:18.860565 kernel: Key type fscrypt-provisioning registered Sep 13 00:04:18.860573 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 13 00:04:18.860581 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:04:18.860589 kernel: ima: No architecture policies found Sep 13 00:04:18.860596 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 13 00:04:18.860603 kernel: clk: Disabling unused clocks Sep 13 00:04:18.860610 kernel: Freeing unused kernel memory: 39488K Sep 13 00:04:18.860618 kernel: Run /init as init process Sep 13 00:04:18.860625 kernel: with arguments: Sep 13 00:04:18.860632 kernel: /init Sep 13 00:04:18.860639 kernel: with environment: Sep 13 00:04:18.860648 kernel: HOME=/ Sep 13 00:04:18.860655 kernel: TERM=linux Sep 13 00:04:18.860662 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:04:18.860671 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:04:18.860681 systemd[1]: Detected virtualization kvm. Sep 13 00:04:18.860689 systemd[1]: Detected architecture arm64. Sep 13 00:04:18.860697 systemd[1]: Running in initrd. Sep 13 00:04:18.860704 systemd[1]: No hostname configured, using default hostname. Sep 13 00:04:18.860713 systemd[1]: Hostname set to . Sep 13 00:04:18.860721 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:04:18.860729 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:04:18.860737 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:04:18.860745 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:04:18.860753 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:04:18.860762 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:04:18.860770 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:04:18.860779 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:04:18.860788 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:04:18.860797 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:04:18.860805 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:04:18.860813 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:04:18.860820 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:04:18.860830 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:04:18.860837 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:04:18.860845 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:04:18.860853 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:04:18.860861 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:04:18.860870 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:04:18.860878 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:04:18.860886 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 13 00:04:18.860894 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:04:18.860904 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:04:18.860912 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:04:18.860920 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:04:18.860928 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:04:18.860936 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:04:18.860943 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:04:18.860951 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:04:18.860960 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:04:18.860967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:04:18.860977 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:04:18.860985 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:04:18.860992 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:04:18.861019 systemd-journald[237]: Collecting audit messages is disabled. Sep 13 00:04:18.861064 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:04:18.861074 systemd-journald[237]: Journal started Sep 13 00:04:18.861094 systemd-journald[237]: Runtime Journal (/run/log/journal/aee1f8e229c34139b219ef70c5189f90) is 5.9M, max 47.3M, 41.4M free. Sep 13 00:04:18.854790 systemd-modules-load[239]: Inserted module 'overlay' Sep 13 00:04:18.865381 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:04:18.865946 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:04:18.869932 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:04:18.871057 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:04:18.872562 kernel: Bridge firewalling registered Sep 13 00:04:18.871486 systemd-modules-load[239]: Inserted module 'br_netfilter' Sep 13 00:04:18.873689 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:04:18.887221 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:04:18.888866 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:04:18.890482 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:04:18.893554 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:04:18.900723 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:04:18.901953 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:04:18.908336 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:04:18.909552 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:04:18.917204 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:04:18.919221 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 13 00:04:18.929341 dracut-cmdline[275]: dracut-dracut-053 Sep 13 00:04:18.932322 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 13 00:04:18.947276 systemd-resolved[278]: Positive Trust Anchors: Sep 13 00:04:18.947295 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:04:18.947327 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:04:18.953825 systemd-resolved[278]: Defaulting to hostname 'linux'. Sep 13 00:04:18.955158 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:04:18.956442 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:04:19.005071 kernel: SCSI subsystem initialized Sep 13 00:04:19.010052 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:04:19.017065 kernel: iscsi: registered transport (tcp) Sep 13 00:04:19.031078 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:04:19.031141 kernel: QLogic iSCSI HBA Driver Sep 13 00:04:19.074876 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:04:19.084202 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:04:19.101499 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:04:19.101552 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:04:19.102614 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:04:19.148073 kernel: raid6: neonx8 gen() 15736 MB/s Sep 13 00:04:19.165083 kernel: raid6: neonx4 gen() 15637 MB/s Sep 13 00:04:19.182066 kernel: raid6: neonx2 gen() 13237 MB/s Sep 13 00:04:19.199064 kernel: raid6: neonx1 gen() 10485 MB/s Sep 13 00:04:19.216063 kernel: raid6: int64x8 gen() 6956 MB/s Sep 13 00:04:19.233063 kernel: raid6: int64x4 gen() 7337 MB/s Sep 13 00:04:19.250066 kernel: raid6: int64x2 gen() 6121 MB/s Sep 13 00:04:19.267228 kernel: raid6: int64x1 gen() 5047 MB/s Sep 13 00:04:19.267240 kernel: raid6: using algorithm neonx8 gen() 15736 MB/s Sep 13 00:04:19.285196 kernel: raid6: .... xor() 12037 MB/s, rmw enabled Sep 13 00:04:19.285210 kernel: raid6: using neon recovery algorithm Sep 13 00:04:19.291511 kernel: xor: measuring software checksum speed Sep 13 00:04:19.291534 kernel: 8regs : 19074 MB/sec Sep 13 00:04:19.291544 kernel: 32regs : 19693 MB/sec Sep 13 00:04:19.292191 kernel: arm64_neon : 26919 MB/sec Sep 13 00:04:19.292208 kernel: xor: using function: arm64_neon (26919 MB/sec) Sep 13 00:04:19.341067 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:04:19.351475 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
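The dracut-cmdline entry above echoes the full kernel command line (BOOT_IMAGE, mount.usr, verity.usr, rootflags, root=LABEL=ROOT, console, verity.usrhash). As a minimal sketch, assuming a hypothetical parse_cmdline helper that is not part of the logged boot flow, such a space-separated command line can be decomposed into parameters like this; note that values may themselves contain '=' (for example verity.usr=PARTUUID=...), so each token is split on the first '=' only:

```python
def parse_cmdline(cmdline: str) -> tuple[list[str], dict[str, list[str]]]:
    """Split a kernel command line into bare flags and key=value parameters.

    Values may contain '=' themselves (e.g. verity.usr=PARTUUID=...), so each
    token is split on the first '=' only. Repeated keys are collected in order.
    """
    flags: list[str] = []
    params: dict[str, list[str]] = {}
    for token in cmdline.split():
        if "=" in token:
            key, value = token.split("=", 1)
            params.setdefault(key, []).append(value)
        else:
            flags.append(token)
    return flags, params


if __name__ == "__main__":
    # Shortened form of the command line logged above; the verity.usrhash
    # value is elided here.
    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
        "console=ttyS0,115200 flatcar.first_boot=detected acpi=force"
    )
    flags, params = parse_cmdline(cmdline)
    print(params["root"])     # ['LABEL=ROOT']
    print(params["console"])  # ['ttyS0,115200']
```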
Sep 13 00:04:19.362190 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:04:19.373236 systemd-udevd[461]: Using default interface naming scheme 'v255'. Sep 13 00:04:19.376392 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:04:19.378659 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:04:19.393531 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Sep 13 00:04:19.419715 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:04:19.431264 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:04:19.470759 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:04:19.477254 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:04:19.492198 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:04:19.493712 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:04:19.495314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:04:19.497199 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:04:19.505204 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:04:19.514730 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:04:19.519062 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 13 00:04:19.529085 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:04:19.532096 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:04:19.532138 kernel: GPT:9289727 != 19775487 Sep 13 00:04:19.532150 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:04:19.532160 kernel: GPT:9289727 != 19775487 Sep 13 00:04:19.532168 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:04:19.532178 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:04:19.531731 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:04:19.531840 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:04:19.539983 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:04:19.541962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:04:19.542130 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:04:19.544645 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:04:19.560294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:04:19.566126 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (511) Sep 13 00:04:19.566160 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (507) Sep 13 00:04:19.574107 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 13 00:04:19.578050 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:04:19.587261 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 13 00:04:19.591820 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 13 00:04:19.595722 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 13 00:04:19.596814 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 13 00:04:19.613238 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:04:19.614925 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:04:19.633580 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:04:19.710016 disk-uuid[550]: Primary Header is updated. Sep 13 00:04:19.710016 disk-uuid[550]: Secondary Entries is updated. Sep 13 00:04:19.710016 disk-uuid[550]: Secondary Header is updated. Sep 13 00:04:19.714064 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:04:19.717052 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:04:19.721059 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:04:20.726065 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:04:20.726780 disk-uuid[559]: The operation has completed successfully. Sep 13 00:04:20.751962 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:04:20.752078 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:04:20.766221 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:04:20.770088 sh[575]: Success Sep 13 00:04:20.780083 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 13 00:04:20.811240 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:04:20.824456 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:04:20.827092 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 00:04:20.837173 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77 Sep 13 00:04:20.837210 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:04:20.837221 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:04:20.839133 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:04:20.839149 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:04:20.846559 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:04:20.847743 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:04:20.860258 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:04:20.861833 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:04:20.874066 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:04:20.874131 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:04:20.875067 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:04:20.879054 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 00:04:20.887075 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Sep 13 00:04:20.889250 kernel: BTRFS info (device vda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:04:20.897337 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:04:20.905288 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:04:20.966702 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:04:20.976495 ignition[674]: Ignition 2.19.0 Sep 13 00:04:20.976505 ignition[674]: Stage: fetch-offline Sep 13 00:04:20.978267 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:04:20.976545 ignition[674]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:04:20.976553 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:04:20.976738 ignition[674]: parsed url from cmdline: "" Sep 13 00:04:20.976742 ignition[674]: no config URL provided Sep 13 00:04:20.976746 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:04:20.976753 ignition[674]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:04:20.976776 ignition[674]: op(1): [started] loading QEMU firmware config module Sep 13 00:04:20.976781 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:04:20.985868 ignition[674]: op(1): [finished] loading QEMU firmware config module Sep 13 00:04:21.000820 systemd-networkd[768]: lo: Link UP Sep 13 00:04:21.000831 systemd-networkd[768]: lo: Gained carrier Sep 13 00:04:21.001549 systemd-networkd[768]: Enumeration completed Sep 13 00:04:21.001845 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:04:21.003120 systemd[1]: Reached target network.target - Network. Sep 13 00:04:21.005152 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:04:21.005156 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:04:21.006082 systemd-networkd[768]: eth0: Link UP Sep 13 00:04:21.006085 systemd-networkd[768]: eth0: Gained carrier Sep 13 00:04:21.006092 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:04:21.025109 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:04:21.041393 ignition[674]: parsing config with SHA512: 6b5944be446e1ecfcad70b008f5fb1a50f710bc8421f0a3ed1634b772327e58c2bd68ee02bde9705c9e7dc8878786bcf87fef8ba3d9558fdf0f5846dbe714523 Sep 13 00:04:21.047025 unknown[674]: fetched base config from "system" Sep 13 00:04:21.047036 unknown[674]: fetched user config from "qemu" Sep 13 00:04:21.047490 ignition[674]: fetch-offline: fetch-offline passed Sep 13 00:04:21.047559 ignition[674]: Ignition finished successfully Sep 13 00:04:21.049398 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:04:21.051104 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:04:21.064263 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 13 00:04:21.075668 ignition[774]: Ignition 2.19.0 Sep 13 00:04:21.075679 ignition[774]: Stage: kargs Sep 13 00:04:21.075872 ignition[774]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:04:21.075881 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:04:21.076832 ignition[774]: kargs: kargs passed Sep 13 00:04:21.076879 ignition[774]: Ignition finished successfully Sep 13 00:04:21.079337 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:04:21.081706 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:04:21.095575 ignition[783]: Ignition 2.19.0 Sep 13 00:04:21.095585 ignition[783]: Stage: disks Sep 13 00:04:21.095755 ignition[783]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:04:21.095767 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:04:21.096649 ignition[783]: disks: disks passed Sep 13 00:04:21.098392 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:04:21.096695 ignition[783]: Ignition finished successfully Sep 13 00:04:21.099521 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:04:21.100655 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:04:21.102242 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:04:21.103548 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:04:21.105046 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:04:21.117211 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 00:04:21.127841 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 13 00:04:21.132938 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:04:21.155184 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:04:21.196900 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:04:21.198226 kernel: EXT4-fs (vda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none. Sep 13 00:04:21.198016 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:04:21.209149 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:04:21.210719 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:04:21.211771 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 13 00:04:21.211847 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:04:21.211872 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:04:21.218635 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (801) Sep 13 00:04:21.217979 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 00:04:21.222845 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:04:21.222864 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:04:21.222874 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:04:21.220053 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 13 00:04:21.226059 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 00:04:21.227495 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:04:21.259736 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:04:21.262989 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:04:21.267858 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:04:21.272006 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:04:21.346540 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:04:21.360170 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:04:21.361721 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:04:21.368056 kernel: BTRFS info (device vda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:04:21.381977 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:04:21.384734 ignition[913]: INFO : Ignition 2.19.0 Sep 13 00:04:21.385566 ignition[913]: INFO : Stage: mount Sep 13 00:04:21.386704 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:04:21.388562 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:04:21.388562 ignition[913]: INFO : mount: mount passed Sep 13 00:04:21.388562 ignition[913]: INFO : Ignition finished successfully Sep 13 00:04:21.389784 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:04:21.400180 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:04:21.835744 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 00:04:21.846233 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:04:21.852070 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (928) Sep 13 00:04:21.852114 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:04:21.854190 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:04:21.854207 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:04:21.859060 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 00:04:21.858278 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 13 00:04:21.875128 ignition[945]: INFO : Ignition 2.19.0 Sep 13 00:04:21.875128 ignition[945]: INFO : Stage: files Sep 13 00:04:21.876490 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:04:21.876490 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:04:21.876490 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:04:21.879251 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:04:21.879251 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:04:21.879251 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:04:21.879251 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:04:21.879251 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:04:21.879242 unknown[945]: wrote ssh authorized keys file for user: core Sep 13 00:04:21.885250 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 00:04:21.885250 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 13 00:04:21.940052 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:04:22.191859 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 00:04:22.191859 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:04:22.195268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:04:22.195268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:04:22.195268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:04:22.195268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:04:22.195268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:04:22.195268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:04:22.195268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:04:22.195268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:04:22.195268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:04:22.195268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:04:22.195268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:04:22.195268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:04:22.195268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 13 00:04:22.546245 systemd-networkd[768]: eth0: Gained IPv6LL Sep 13 00:04:22.583113 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:04:23.075107 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:04:23.077161 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:04:23.077161 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:04:23.077161 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:04:23.077161 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:04:23.077161 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 13 00:04:23.077161 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:04:23.077161 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:04:23.077161 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 13 00:04:23.077161 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 00:04:23.095149 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:04:23.098902 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:04:23.100156 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:04:23.100156 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:04:23.100156 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:04:23.100156 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:04:23.100156 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:04:23.100156 ignition[945]: INFO : files: files passed Sep 13 00:04:23.100156 ignition[945]: INFO : Ignition finished successfully Sep 13 00:04:23.101571 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:04:23.112200 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:04:23.113782 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Sep 13 00:04:23.117352 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:04:23.117432 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:04:23.121300 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Sep 13 00:04:23.124890 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:04:23.124890 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:04:23.127793 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:04:23.128692 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:04:23.130240 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:04:23.139249 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:04:23.158479 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:04:23.158617 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:04:23.160461 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:04:23.161943 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:04:23.163589 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:04:23.179259 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:04:23.190533 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:04:23.192737 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:04:23.204189 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:04:23.205146 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:04:23.206940 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:04:23.208454 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:04:23.208573 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:04:23.210802 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:04:23.212631 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:04:23.214005 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:04:23.215450 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:04:23.216975 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:04:23.218767 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:04:23.220318 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:04:23.221928 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:04:23.223742 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:04:23.225154 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:04:23.226431 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:04:23.226552 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:04:23.228565 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 13 00:04:23.230151 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:04:23.231802 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:04:23.235142 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:04:23.236186 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:04:23.236305 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:04:23.238788 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:04:23.238914 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:04:23.240560 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:04:23.241742 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:04:23.246131 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:04:23.247211 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:04:23.248840 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:04:23.250164 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:04:23.250251 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:04:23.251544 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:04:23.251622 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:04:23.252854 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:04:23.252964 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:04:23.254279 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:04:23.254373 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:04:23.265292 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:04:23.267453 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:04:23.268131 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:04:23.268250 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:04:23.269762 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:04:23.269855 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:04:23.274252 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:04:23.275017 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:04:23.279117 ignition[999]: INFO : Ignition 2.19.0 Sep 13 00:04:23.279117 ignition[999]: INFO : Stage: umount Sep 13 00:04:23.280697 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:04:23.280697 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:04:23.280697 ignition[999]: INFO : umount: umount passed Sep 13 00:04:23.280697 ignition[999]: INFO : Ignition finished successfully Sep 13 00:04:23.280156 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:04:23.282180 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:04:23.282300 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:04:23.283807 systemd[1]: Stopped target network.target - Network. Sep 13 00:04:23.284770 systemd[1]: ignition-disks.service: Deactivated successfully. 
Sep 13 00:04:23.284826 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:04:23.286146 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:04:23.286185 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:04:23.287503 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:04:23.287545 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:04:23.288948 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:04:23.288989 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:04:23.290483 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:04:23.291669 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:04:23.300598 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:04:23.300710 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:04:23.303075 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:04:23.303136 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:04:23.304118 systemd-networkd[768]: eth0: DHCPv6 lease lost Sep 13 00:04:23.306189 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:04:23.306295 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:04:23.307952 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:04:23.308008 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:04:23.319129 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:04:23.320542 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:04:23.320600 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:04:23.322258 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:04:23.322296 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:04:23.323863 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:04:23.323901 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:04:23.325410 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:04:23.336372 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:04:23.336581 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:04:23.343552 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:04:23.343653 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:04:23.345188 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:04:23.345227 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:04:23.348672 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:04:23.348822 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:04:23.350816 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:04:23.350857 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:04:23.351908 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:04:23.351937 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 13 00:04:23.353631 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:04:23.353673 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:04:23.355810 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:04:23.355857 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:04:23.357970 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:04:23.358018 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:04:23.368203 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:04:23.368999 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:04:23.369078 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:04:23.370882 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 13 00:04:23.370923 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:04:23.372558 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:04:23.372600 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:04:23.374271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:04:23.374312 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:04:23.376185 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:04:23.376269 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:04:23.378266 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:04:23.380096 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:04:23.389862 systemd[1]: Switching root. Sep 13 00:04:23.420940 systemd-journald[237]: Journal stopped Sep 13 00:04:24.126161 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 13 00:04:24.126217 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:04:24.126229 kernel: SELinux: policy capability open_perms=1 Sep 13 00:04:24.126239 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:04:24.126252 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:04:24.126265 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:04:24.126275 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:04:24.126287 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:04:24.126297 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:04:24.126307 kernel: audit: type=1403 audit(1757721863.558:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:04:24.126318 systemd[1]: Successfully loaded SELinux policy in 32.869ms. Sep 13 00:04:24.126334 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.797ms. Sep 13 00:04:24.126348 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:04:24.126359 systemd[1]: Detected virtualization kvm. Sep 13 00:04:24.126370 systemd[1]: Detected architecture arm64. 
Sep 13 00:04:24.126382 systemd[1]: Detected first boot. Sep 13 00:04:24.126392 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:04:24.126402 zram_generator::config[1045]: No configuration found. Sep 13 00:04:24.126414 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:04:24.126424 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:04:24.126435 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:04:24.126445 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:04:24.126457 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:04:24.126467 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:04:24.126479 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:04:24.126490 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:04:24.126501 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:04:24.126512 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:04:24.126522 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:04:24.126533 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:04:24.126543 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:04:24.126554 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:04:24.126565 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:04:24.126578 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:04:24.126589 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:04:24.126600 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:04:24.126610 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 13 00:04:24.126621 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:04:24.126631 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:04:24.126642 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:04:24.126652 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:04:24.126665 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:04:24.126676 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:04:24.126686 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:04:24.126697 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:04:24.126712 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:04:24.126723 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:04:24.126733 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:04:24.126745 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:04:24.126757 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 13 00:04:24.126768 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:04:24.126779 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:04:24.126789 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:04:24.126800 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:04:24.126811 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:04:24.126821 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:04:24.126832 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:04:24.126843 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:04:24.126856 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:04:24.126867 systemd[1]: Reached target machines.target - Containers. Sep 13 00:04:24.126878 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:04:24.126889 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:04:24.126899 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:04:24.126910 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:04:24.126921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:04:24.126931 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:04:24.126943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:04:24.126954 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:04:24.126964 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:04:24.126975 kernel: fuse: init (API version 7.39) Sep 13 00:04:24.126986 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:04:24.126996 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:04:24.127007 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:04:24.127021 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:04:24.127032 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:04:24.127078 kernel: ACPI: bus type drm_connector registered Sep 13 00:04:24.127089 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:04:24.127100 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:04:24.127111 kernel: loop: module loaded Sep 13 00:04:24.127121 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:04:24.127131 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:04:24.127160 systemd-journald[1116]: Collecting audit messages is disabled. Sep 13 00:04:24.127182 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:04:24.127196 systemd[1]: verity-setup.service: Deactivated successfully. 
Sep 13 00:04:24.127206 systemd-journald[1116]: Journal started Sep 13 00:04:24.127227 systemd-journald[1116]: Runtime Journal (/run/log/journal/aee1f8e229c34139b219ef70c5189f90) is 5.9M, max 47.3M, 41.4M free. Sep 13 00:04:23.930671 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:04:23.943993 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 13 00:04:23.944355 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:04:24.128669 systemd[1]: Stopped verity-setup.service. Sep 13 00:04:24.132534 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:04:24.133215 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:04:24.134170 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:04:24.135113 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:04:24.135952 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:04:24.136945 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:04:24.138010 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:04:24.138979 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:04:24.140246 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:04:24.141424 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:04:24.141561 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:04:24.142721 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:04:24.142859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:04:24.144130 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:04:24.144271 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:04:24.145324 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:04:24.145468 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:04:24.146625 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:04:24.146754 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:04:24.148027 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:04:24.148194 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:04:24.149260 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:04:24.150376 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:04:24.151729 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:04:24.163649 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:04:24.173146 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:04:24.174981 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:04:24.175944 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:04:24.175975 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:04:24.177863 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Sep 13 00:04:24.179918 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:04:24.181862 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:04:24.182836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:04:24.184169 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:04:24.185795 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:04:24.186813 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:04:24.190246 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:04:24.191685 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:04:24.195225 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:04:24.198310 systemd-journald[1116]: Time spent on flushing to /var/log/journal/aee1f8e229c34139b219ef70c5189f90 is 13.732ms for 855 entries. Sep 13 00:04:24.198310 systemd-journald[1116]: System Journal (/var/log/journal/aee1f8e229c34139b219ef70c5189f90) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:04:24.222165 systemd-journald[1116]: Received client request to flush runtime journal. Sep 13 00:04:24.222205 kernel: loop0: detected capacity change from 0 to 203944 Sep 13 00:04:24.198143 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:04:24.203336 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:04:24.209053 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:04:24.210271 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:04:24.211248 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:04:24.212548 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:04:24.213858 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:04:24.219192 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:04:24.232319 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:04:24.236593 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Sep 13 00:04:24.236608 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Sep 13 00:04:24.242820 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:04:24.242561 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:04:24.244113 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:04:24.246915 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:04:24.251499 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:04:24.256641 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:04:24.258096 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Sep 13 00:04:24.258645 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:04:24.266094 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:04:24.269136 kernel: loop1: detected capacity change from 0 to 114328 Sep 13 00:04:24.284669 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:04:24.291280 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:04:24.296210 kernel: loop2: detected capacity change from 0 to 114432 Sep 13 00:04:24.305706 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Sep 13 00:04:24.305729 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Sep 13 00:04:24.310364 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:04:24.325823 kernel: loop3: detected capacity change from 0 to 203944 Sep 13 00:04:24.331073 kernel: loop4: detected capacity change from 0 to 114328 Sep 13 00:04:24.335063 kernel: loop5: detected capacity change from 0 to 114432 Sep 13 00:04:24.337723 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 13 00:04:24.338169 (sd-merge)[1184]: Merged extensions into '/usr'. Sep 13 00:04:24.342224 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:04:24.342242 systemd[1]: Reloading... Sep 13 00:04:24.390191 zram_generator::config[1207]: No configuration found. Sep 13 00:04:24.444162 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:04:24.487788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:04:24.532317 systemd[1]: Reloading finished in 189 ms. Sep 13 00:04:24.558702 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:04:24.561450 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:04:24.578219 systemd[1]: Starting ensure-sysext.service... Sep 13 00:04:24.579977 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:04:24.585232 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:04:24.585249 systemd[1]: Reloading... Sep 13 00:04:24.597560 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:04:24.598169 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:04:24.598895 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:04:24.599257 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Sep 13 00:04:24.599381 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Sep 13 00:04:24.601871 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:04:24.601973 systemd-tmpfiles[1248]: Skipping /boot Sep 13 00:04:24.609002 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 13 00:04:24.609292 systemd-tmpfiles[1248]: Skipping /boot Sep 13 00:04:24.638189 zram_generator::config[1275]: No configuration found. Sep 13 00:04:24.731624 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:04:24.776644 systemd[1]: Reloading finished in 191 ms. Sep 13 00:04:24.793132 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:04:24.805501 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:04:24.813140 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:04:24.815496 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:04:24.817776 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:04:24.821347 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:04:24.827372 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:04:24.832086 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:04:24.840124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:04:24.841382 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:04:24.843687 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:04:24.847035 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:04:24.848629 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:04:24.850776 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:04:24.852715 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:04:24.853280 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:04:24.857244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:04:24.857417 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:04:24.859838 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:04:24.863624 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:04:24.863756 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:04:24.868077 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:04:24.876094 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:04:24.881534 systemd-udevd[1319]: Using default interface naming scheme 'v255'. Sep 13 00:04:24.886394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:04:24.890286 augenrules[1342]: No rules Sep 13 00:04:24.888912 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:04:24.892327 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:04:24.893464 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 13 00:04:24.896518 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:04:24.898127 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:04:24.901088 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:04:24.902219 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:04:24.903768 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:04:24.905335 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:04:24.905467 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:04:24.906921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:04:24.907064 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:04:24.908479 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:04:24.908603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:04:24.909881 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:04:24.920375 systemd[1]: Finished ensure-sysext.service. Sep 13 00:04:24.928940 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:04:24.936319 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:04:24.942285 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:04:24.947288 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:04:24.947849 systemd-resolved[1316]: Positive Trust Anchors: Sep 13 00:04:24.947868 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:04:24.947900 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:04:24.950766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:04:24.951909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:04:24.954602 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:04:24.955597 systemd-resolved[1316]: Defaulting to hostname 'linux'. Sep 13 00:04:24.957026 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 00:04:24.961180 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:04:24.961571 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:04:24.962873 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:04:24.963023 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 13 00:04:24.964231 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:04:24.964367 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:04:24.965566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:04:24.965700 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:04:24.967175 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:04:24.967308 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:04:24.971259 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1371) Sep 13 00:04:24.974925 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 13 00:04:24.977078 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:04:24.978261 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:04:24.978324 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:04:25.013968 systemd-networkd[1386]: lo: Link UP Sep 13 00:04:25.013977 systemd-networkd[1386]: lo: Gained carrier Sep 13 00:04:25.014789 systemd-networkd[1386]: Enumeration completed Sep 13 00:04:25.014891 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:04:25.015258 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:04:25.015266 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:04:25.015841 systemd-networkd[1386]: eth0: Link UP Sep 13 00:04:25.015849 systemd-networkd[1386]: eth0: Gained carrier Sep 13 00:04:25.015862 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:04:25.016081 systemd[1]: Reached target network.target - Network. Sep 13 00:04:25.021221 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:04:25.029285 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 00:04:25.030664 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:04:25.032225 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:04:25.033248 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:04:25.033334 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection. Sep 13 00:04:25.034098 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 00:04:25.034145 systemd-timesyncd[1387]: Initial clock synchronization to Sat 2025-09-13 00:04:24.753310 UTC. Sep 13 00:04:25.041779 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 00:04:25.052250 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:04:25.060365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 13 00:04:25.065101 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:04:25.067948 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:04:25.070412 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:04:25.082903 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:04:25.099647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:04:25.118560 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:04:25.119964 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:04:25.121140 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:04:25.122183 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:04:25.123284 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:04:25.124628 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:04:25.125717 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:04:25.126864 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:04:25.127995 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:04:25.128031 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:04:25.128817 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:04:25.130624 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:04:25.132944 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:04:25.147028 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:04:25.149181 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:04:25.150705 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:04:25.151812 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:04:25.152762 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:04:25.153676 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:04:25.153715 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:04:25.154617 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:04:25.156634 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:04:25.157521 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:04:25.161183 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:04:25.163399 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:04:25.165246 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:04:25.167284 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Sep 13 00:04:25.168090 jq[1419]: false Sep 13 00:04:25.169098 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:04:25.172021 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:04:25.176267 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:04:25.181200 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:04:25.181966 extend-filesystems[1420]: Found loop3 Sep 13 00:04:25.183849 extend-filesystems[1420]: Found loop4 Sep 13 00:04:25.183849 extend-filesystems[1420]: Found loop5 Sep 13 00:04:25.183849 extend-filesystems[1420]: Found vda Sep 13 00:04:25.183849 extend-filesystems[1420]: Found vda1 Sep 13 00:04:25.183849 extend-filesystems[1420]: Found vda2 Sep 13 00:04:25.183849 extend-filesystems[1420]: Found vda3 Sep 13 00:04:25.183849 extend-filesystems[1420]: Found usr Sep 13 00:04:25.183849 extend-filesystems[1420]: Found vda4 Sep 13 00:04:25.183849 extend-filesystems[1420]: Found vda6 Sep 13 00:04:25.183849 extend-filesystems[1420]: Found vda7 Sep 13 00:04:25.183849 extend-filesystems[1420]: Found vda9 Sep 13 00:04:25.183849 extend-filesystems[1420]: Checking size of /dev/vda9 Sep 13 00:04:25.183078 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:04:25.195123 dbus-daemon[1418]: [system] SELinux support is enabled Sep 13 00:04:25.183509 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:04:25.185346 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:04:25.188323 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:04:25.203878 jq[1435]: true Sep 13 00:04:25.192506 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:04:25.196399 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:04:25.204600 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:04:25.204772 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:04:25.205032 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:04:25.205256 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:04:25.206981 extend-filesystems[1420]: Resized partition /dev/vda9 Sep 13 00:04:25.208195 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:04:25.208373 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:04:25.210315 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:04:25.221467 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 00:04:25.221728 update_engine[1433]: I20250913 00:04:25.221513 1433 main.cc:92] Flatcar Update Engine starting Sep 13 00:04:25.225641 update_engine[1433]: I20250913 00:04:25.225596 1433 update_check_scheduler.cc:74] Next update check in 9m15s Sep 13 00:04:25.227085 tar[1442]: linux-arm64/helm Sep 13 00:04:25.226475 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 13 00:04:25.226481 (ntainerd)[1445]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:04:25.226505 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:04:25.229298 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:04:25.229323 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:04:25.231158 jq[1444]: true Sep 13 00:04:25.231314 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:04:25.243968 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:04:25.255064 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1371) Sep 13 00:04:25.258111 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 00:04:25.266121 extend-filesystems[1443]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:04:25.266121 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:04:25.266121 extend-filesystems[1443]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 00:04:25.275082 extend-filesystems[1420]: Resized filesystem in /dev/vda9 Sep 13 00:04:25.268451 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:04:25.268660 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:04:25.282763 systemd-logind[1428]: Watching system buttons on /dev/input/event0 (Power Button) Sep 13 00:04:25.287681 systemd-logind[1428]: New seat seat0. Sep 13 00:04:25.288641 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:04:25.291468 bash[1478]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:04:25.295064 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:04:25.296619 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 13 00:04:25.312229 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:04:25.403501 containerd[1445]: time="2025-09-13T00:04:25.403406960Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:04:25.428376 containerd[1445]: time="2025-09-13T00:04:25.428327720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:04:25.431045 containerd[1445]: time="2025-09-13T00:04:25.430004200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:04:25.431045 containerd[1445]: time="2025-09-13T00:04:25.430060760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:04:25.431045 containerd[1445]: time="2025-09-13T00:04:25.430079640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:04:25.431045 containerd[1445]: time="2025-09-13T00:04:25.430238320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 13 00:04:25.431045 containerd[1445]: time="2025-09-13T00:04:25.430256320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:04:25.431045 containerd[1445]: time="2025-09-13T00:04:25.430306880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:04:25.431045 containerd[1445]: time="2025-09-13T00:04:25.430318920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:04:25.431045 containerd[1445]: time="2025-09-13T00:04:25.430485920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:04:25.431045 containerd[1445]: time="2025-09-13T00:04:25.430501880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:04:25.431045 containerd[1445]: time="2025-09-13T00:04:25.430514080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:04:25.431045 containerd[1445]: time="2025-09-13T00:04:25.430523680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:04:25.431253 containerd[1445]: time="2025-09-13T00:04:25.430595000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:04:25.431253 containerd[1445]: time="2025-09-13T00:04:25.430770200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:04:25.431253 containerd[1445]: time="2025-09-13T00:04:25.430873800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:04:25.431253 containerd[1445]: time="2025-09-13T00:04:25.430888640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:04:25.431253 containerd[1445]: time="2025-09-13T00:04:25.430959920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:04:25.431253 containerd[1445]: time="2025-09-13T00:04:25.430997880Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:04:25.434323 containerd[1445]: time="2025-09-13T00:04:25.434295840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:04:25.434375 containerd[1445]: time="2025-09-13T00:04:25.434340240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:04:25.434375 containerd[1445]: time="2025-09-13T00:04:25.434356760Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:04:25.434427 containerd[1445]: time="2025-09-13T00:04:25.434373320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Sep 13 00:04:25.434427 containerd[1445]: time="2025-09-13T00:04:25.434388640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:04:25.434566 containerd[1445]: time="2025-09-13T00:04:25.434543440Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:04:25.434812 containerd[1445]: time="2025-09-13T00:04:25.434784800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:04:25.434908 containerd[1445]: time="2025-09-13T00:04:25.434890000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:04:25.434939 containerd[1445]: time="2025-09-13T00:04:25.434910440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:04:25.434939 containerd[1445]: time="2025-09-13T00:04:25.434923680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:04:25.434973 containerd[1445]: time="2025-09-13T00:04:25.434936960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:04:25.434973 containerd[1445]: time="2025-09-13T00:04:25.434950520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:04:25.434973 containerd[1445]: time="2025-09-13T00:04:25.434964600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:04:25.435021 containerd[1445]: time="2025-09-13T00:04:25.434979120Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:04:25.435021 containerd[1445]: time="2025-09-13T00:04:25.434993080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:04:25.435021 containerd[1445]: time="2025-09-13T00:04:25.435005480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:04:25.435021 containerd[1445]: time="2025-09-13T00:04:25.435017160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:04:25.435107 containerd[1445]: time="2025-09-13T00:04:25.435029720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:04:25.435107 containerd[1445]: time="2025-09-13T00:04:25.435069840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435107 containerd[1445]: time="2025-09-13T00:04:25.435085280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435107 containerd[1445]: time="2025-09-13T00:04:25.435098000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435186 containerd[1445]: time="2025-09-13T00:04:25.435110920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435186 containerd[1445]: time="2025-09-13T00:04:25.435123280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Sep 13 00:04:25.435186 containerd[1445]: time="2025-09-13T00:04:25.435136640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435186 containerd[1445]: time="2025-09-13T00:04:25.435149160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435186 containerd[1445]: time="2025-09-13T00:04:25.435161600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435186 containerd[1445]: time="2025-09-13T00:04:25.435173640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435284 containerd[1445]: time="2025-09-13T00:04:25.435187440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435284 containerd[1445]: time="2025-09-13T00:04:25.435200440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435284 containerd[1445]: time="2025-09-13T00:04:25.435212120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435284 containerd[1445]: time="2025-09-13T00:04:25.435223480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435284 containerd[1445]: time="2025-09-13T00:04:25.435239040Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:04:25.435284 containerd[1445]: time="2025-09-13T00:04:25.435264880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435284 containerd[1445]: time="2025-09-13T00:04:25.435277200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.435398 containerd[1445]: time="2025-09-13T00:04:25.435288840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:04:25.437046 containerd[1445]: time="2025-09-13T00:04:25.435915920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:04:25.437046 containerd[1445]: time="2025-09-13T00:04:25.435954480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:04:25.437046 containerd[1445]: time="2025-09-13T00:04:25.436118080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:04:25.437046 containerd[1445]: time="2025-09-13T00:04:25.436138480Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:04:25.437046 containerd[1445]: time="2025-09-13T00:04:25.436148280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.437046 containerd[1445]: time="2025-09-13T00:04:25.436166520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:04:25.437046 containerd[1445]: time="2025-09-13T00:04:25.436176640Z" level=info msg="NRI interface is disabled by configuration." 
Sep 13 00:04:25.437046 containerd[1445]: time="2025-09-13T00:04:25.436189200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:04:25.437197 containerd[1445]: time="2025-09-13T00:04:25.436511800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:04:25.437197 containerd[1445]: time="2025-09-13T00:04:25.436569000Z" level=info msg="Connect containerd service" Sep 13 00:04:25.437197 containerd[1445]: time="2025-09-13T00:04:25.436595080Z" level=info msg="using legacy CRI server" Sep 13 00:04:25.437197 containerd[1445]: time="2025-09-13T00:04:25.436603120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:04:25.437197 containerd[1445]: time="2025-09-13T00:04:25.436682840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:04:25.437425 containerd[1445]: time="2025-09-13T00:04:25.437394000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:04:25.437624 containerd[1445]: time="2025-09-13T00:04:25.437593040Z" level=info msg="Start subscribing containerd event" Sep 13 00:04:25.437649 containerd[1445]: time="2025-09-13T00:04:25.437637240Z" level=info msg="Start recovering state" Sep 13 00:04:25.437712 containerd[1445]: time="2025-09-13T00:04:25.437697960Z" level=info msg="Start event monitor" Sep 13 00:04:25.437735 containerd[1445]: time="2025-09-13T00:04:25.437712600Z" level=info msg="Start snapshots syncer" Sep 13 00:04:25.437735 containerd[1445]: time="2025-09-13T00:04:25.437721520Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:04:25.437735 containerd[1445]: time="2025-09-13T00:04:25.437730520Z" level=info msg="Start streaming server" Sep 13 00:04:25.438347 containerd[1445]: time="2025-09-13T00:04:25.438322320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:04:25.438387 containerd[1445]: time="2025-09-13T00:04:25.438369200Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:04:25.439529 containerd[1445]: time="2025-09-13T00:04:25.438418880Z" level=info msg="containerd successfully booted in 0.037594s" Sep 13 00:04:25.438502 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:04:25.586403 tar[1442]: linux-arm64/LICENSE Sep 13 00:04:25.587124 tar[1442]: linux-arm64/README.md Sep 13 00:04:25.600136 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:04:25.730804 sshd_keygen[1438]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:04:25.751155 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:04:25.765341 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:04:25.771187 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:04:25.771378 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:04:25.774061 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:04:25.785944 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:04:25.789736 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:04:25.791699 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 13 00:04:25.792809 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:04:26.578167 systemd-networkd[1386]: eth0: Gained IPv6LL Sep 13 00:04:26.582089 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:04:26.583454 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:04:26.599350 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 00:04:26.601628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:26.603630 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:04:26.617767 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:04:26.617988 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 00:04:26.619463 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:04:26.620548 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 13 00:04:27.154897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:27.156460 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:04:27.159364 (kubelet)[1531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:04:27.161115 systemd[1]: Startup finished in 551ms (kernel) + 4.874s (initrd) + 3.636s (userspace) = 9.062s. Sep 13 00:04:27.533952 kubelet[1531]: E0913 00:04:27.533838 1531 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:04:27.536144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:04:27.536289 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:04:31.352796 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:04:31.353942 systemd[1]: Started sshd@0-10.0.0.78:22-10.0.0.1:42912.service - OpenSSH per-connection server daemon (10.0.0.1:42912). Sep 13 00:04:31.401661 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 42912 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:04:31.403250 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:31.410911 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:04:31.420529 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:04:31.423140 systemd-logind[1428]: New session 1 of user core. Sep 13 00:04:31.431121 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:04:31.433584 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:04:31.440782 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:04:31.518890 systemd[1548]: Queued start job for default target default.target. Sep 13 00:04:31.529986 systemd[1548]: Created slice app.slice - User Application Slice. Sep 13 00:04:31.530015 systemd[1548]: Reached target paths.target - Paths. Sep 13 00:04:31.530035 systemd[1548]: Reached target timers.target - Timers. Sep 13 00:04:31.531263 systemd[1548]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:04:31.541397 systemd[1548]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:04:31.541463 systemd[1548]: Reached target sockets.target - Sockets. Sep 13 00:04:31.541475 systemd[1548]: Reached target basic.target - Basic System. Sep 13 00:04:31.541512 systemd[1548]: Reached target default.target - Main User Target. Sep 13 00:04:31.541536 systemd[1548]: Startup finished in 94ms. Sep 13 00:04:31.541729 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:04:31.543136 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:04:31.603350 systemd[1]: Started sshd@1-10.0.0.78:22-10.0.0.1:42920.service - OpenSSH per-connection server daemon (10.0.0.1:42920). 
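The kubelet exit above (status=1/FAILURE) is caused by the missing /var/lib/kubelet/config.yaml, which kubeadm normally writes during init/join. A quick pre-flight probe like the sketch below, assuming the usual kubeadm-managed paths, shows which of the expected files are still absent.

```python
# Hypothetical pre-flight check (not part of the log): kubelet.service keeps
# exiting until kubeadm writes /var/lib/kubelet/config.yaml, so probing the
# expected files explains the repeated failures. Paths are the usual kubeadm
# defaults, assumed here for illustration.
from pathlib import Path

expected = [
    "/var/lib/kubelet/config.yaml",            # kubelet config written by kubeadm
    "/etc/kubernetes/bootstrap-kubelet.conf",  # bootstrap kubeconfig (join phase)
    "/etc/kubernetes/kubelet.conf",            # final kubeconfig after bootstrap
]

for f in expected:
    state = "present" if Path(f).exists() else "missing"
    print(f"{f}: {state}")
```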
Sep 13 00:04:31.638277 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 42920 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:04:31.639673 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:31.645511 systemd-logind[1428]: New session 2 of user core. Sep 13 00:04:31.659241 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:04:31.710699 sshd[1559]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:31.720515 systemd[1]: sshd@1-10.0.0.78:22-10.0.0.1:42920.service: Deactivated successfully. Sep 13 00:04:31.722164 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:04:31.723364 systemd-logind[1428]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:04:31.724566 systemd[1]: Started sshd@2-10.0.0.78:22-10.0.0.1:42934.service - OpenSSH per-connection server daemon (10.0.0.1:42934). Sep 13 00:04:31.725300 systemd-logind[1428]: Removed session 2. Sep 13 00:04:31.759840 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 42934 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:04:31.761148 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:31.765228 systemd-logind[1428]: New session 3 of user core. Sep 13 00:04:31.779234 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:04:31.826882 sshd[1566]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:31.836651 systemd[1]: sshd@2-10.0.0.78:22-10.0.0.1:42934.service: Deactivated successfully. Sep 13 00:04:31.838191 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:04:31.839406 systemd-logind[1428]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:04:31.841096 systemd[1]: Started sshd@3-10.0.0.78:22-10.0.0.1:42942.service - OpenSSH per-connection server daemon (10.0.0.1:42942). Sep 13 00:04:31.842223 systemd-logind[1428]: Removed session 3. Sep 13 00:04:31.876140 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 42942 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:04:31.877345 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:31.882032 systemd-logind[1428]: New session 4 of user core. Sep 13 00:04:31.896251 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:04:31.950324 sshd[1573]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:31.964656 systemd[1]: sshd@3-10.0.0.78:22-10.0.0.1:42942.service: Deactivated successfully. Sep 13 00:04:31.966152 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:04:31.968207 systemd-logind[1428]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:04:31.968695 systemd[1]: Started sshd@4-10.0.0.78:22-10.0.0.1:42956.service - OpenSSH per-connection server daemon (10.0.0.1:42956). Sep 13 00:04:31.969409 systemd-logind[1428]: Removed session 4. Sep 13 00:04:32.009467 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 42956 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:04:32.015465 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:32.036572 systemd-logind[1428]: New session 5 of user core. Sep 13 00:04:32.045236 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 13 00:04:32.100518 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:04:32.100794 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:04:32.119887 sudo[1583]: pam_unix(sudo:session): session closed for user root Sep 13 00:04:32.122119 sshd[1580]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:32.136694 systemd[1]: sshd@4-10.0.0.78:22-10.0.0.1:42956.service: Deactivated successfully. Sep 13 00:04:32.140337 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:04:32.141575 systemd-logind[1428]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:04:32.142839 systemd[1]: Started sshd@5-10.0.0.78:22-10.0.0.1:42958.service - OpenSSH per-connection server daemon (10.0.0.1:42958). Sep 13 00:04:32.143946 systemd-logind[1428]: Removed session 5. Sep 13 00:04:32.186313 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 42958 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:04:32.187736 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:32.195653 systemd-logind[1428]: New session 6 of user core. Sep 13 00:04:32.201220 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:04:32.257531 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:04:32.257798 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:04:32.268401 sudo[1592]: pam_unix(sudo:session): session closed for user root Sep 13 00:04:32.273340 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:04:32.273942 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:04:32.292300 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:04:32.293427 auditctl[1595]: No rules Sep 13 00:04:32.293940 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:04:32.294125 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:04:32.296220 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:04:32.327091 augenrules[1613]: No rules Sep 13 00:04:32.329127 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:04:32.330654 sudo[1591]: pam_unix(sudo:session): session closed for user root Sep 13 00:04:32.332177 sshd[1588]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:32.342353 systemd[1]: sshd@5-10.0.0.78:22-10.0.0.1:42958.service: Deactivated successfully. Sep 13 00:04:32.344422 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:04:32.345735 systemd-logind[1428]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:04:32.359368 systemd[1]: Started sshd@6-10.0.0.78:22-10.0.0.1:42970.service - OpenSSH per-connection server daemon (10.0.0.1:42970). Sep 13 00:04:32.360032 systemd-logind[1428]: Removed session 6. Sep 13 00:04:32.395724 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 42970 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:04:32.397602 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:32.401249 systemd-logind[1428]: New session 7 of user core. Sep 13 00:04:32.410191 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 13 00:04:32.463578 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:04:32.463851 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:04:32.740395 (dockerd)[1643]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:04:32.740719 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:04:32.959956 dockerd[1643]: time="2025-09-13T00:04:32.959873473Z" level=info msg="Starting up" Sep 13 00:04:33.134114 dockerd[1643]: time="2025-09-13T00:04:33.133983268Z" level=info msg="Loading containers: start." Sep 13 00:04:33.225088 kernel: Initializing XFRM netlink socket Sep 13 00:04:33.296332 systemd-networkd[1386]: docker0: Link UP Sep 13 00:04:33.315228 dockerd[1643]: time="2025-09-13T00:04:33.315162096Z" level=info msg="Loading containers: done." Sep 13 00:04:33.327798 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1750156100-merged.mount: Deactivated successfully. Sep 13 00:04:33.331144 dockerd[1643]: time="2025-09-13T00:04:33.331089659Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:04:33.331247 dockerd[1643]: time="2025-09-13T00:04:33.331190924Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:04:33.331312 dockerd[1643]: time="2025-09-13T00:04:33.331293806Z" level=info msg="Daemon has completed initialization" Sep 13 00:04:33.362173 dockerd[1643]: time="2025-09-13T00:04:33.362015526Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:04:33.362289 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:04:33.903111 containerd[1445]: time="2025-09-13T00:04:33.902777138Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:04:34.436513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1050228878.mount: Deactivated successfully. 
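After dockerd reports "API listen on /run/docker.sock", the daemon can be checked over its Unix socket. The sketch below is a hypothetical stdlib-only probe of the Engine API's /version endpoint; it is not something run on this host.

```python
# Hypothetical check (not from the log): confirm the Docker daemon that just
# reported "API listen on /run/docker.sock" answers over its Unix socket.
import json
import socket

SOCK = "/run/docker.sock"  # socket path taken from the dockerd log line above

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCK)
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    raw = b""
    while chunk := s.recv(4096):   # HTTP/1.0: server closes when done
        raw += chunk

headers, _, body = raw.partition(b"\r\n\r\n")
print(headers.decode().splitlines()[0])   # status line, e.g. "HTTP/1.0 200 OK"
print(json.loads(body).get("Version"))    # daemon version string
```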
Sep 13 00:04:35.497702 containerd[1445]: time="2025-09-13T00:04:35.497642865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:35.498900 containerd[1445]: time="2025-09-13T00:04:35.498874019Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327" Sep 13 00:04:35.499953 containerd[1445]: time="2025-09-13T00:04:35.499907956Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:35.503211 containerd[1445]: time="2025-09-13T00:04:35.503179992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:35.504320 containerd[1445]: time="2025-09-13T00:04:35.504292135Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.601467235s" Sep 13 00:04:35.504581 containerd[1445]: time="2025-09-13T00:04:35.504398718Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 13 00:04:35.505918 containerd[1445]: time="2025-09-13T00:04:35.505882895Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:04:36.649292 containerd[1445]: time="2025-09-13T00:04:36.646570286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:36.650209 containerd[1445]: time="2025-09-13T00:04:36.650172804Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769" Sep 13 00:04:36.651134 containerd[1445]: time="2025-09-13T00:04:36.651107456Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:36.654300 containerd[1445]: time="2025-09-13T00:04:36.654243124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:36.655252 containerd[1445]: time="2025-09-13T00:04:36.655222559Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.149307987s" Sep 13 00:04:36.655306 containerd[1445]: time="2025-09-13T00:04:36.655258623Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 13 
00:04:36.655823 containerd[1445]: time="2025-09-13T00:04:36.655805132Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:04:37.757377 containerd[1445]: time="2025-09-13T00:04:37.757227494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:37.758860 containerd[1445]: time="2025-09-13T00:04:37.758823261Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508" Sep 13 00:04:37.759724 containerd[1445]: time="2025-09-13T00:04:37.759673696Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:37.764139 containerd[1445]: time="2025-09-13T00:04:37.764105968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:37.765392 containerd[1445]: time="2025-09-13T00:04:37.765289015Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.109452719s" Sep 13 00:04:37.765392 containerd[1445]: time="2025-09-13T00:04:37.765329208Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 13 00:04:37.767105 containerd[1445]: time="2025-09-13T00:04:37.765819456Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:04:37.786675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:04:37.801538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:37.941099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:37.948756 (kubelet)[1865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:04:38.001179 kubelet[1865]: E0913 00:04:38.000442 1865 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:04:38.004456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:04:38.004604 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:04:38.923001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2868104871.mount: Deactivated successfully. 
Sep 13 00:04:39.291438 containerd[1445]: time="2025-09-13T00:04:39.291309876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:39.292800 containerd[1445]: time="2025-09-13T00:04:39.292764664Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909" Sep 13 00:04:39.293750 containerd[1445]: time="2025-09-13T00:04:39.293697659Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:39.296663 containerd[1445]: time="2025-09-13T00:04:39.296628742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:39.298111 containerd[1445]: time="2025-09-13T00:04:39.297312527Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.531462108s" Sep 13 00:04:39.298111 containerd[1445]: time="2025-09-13T00:04:39.297347112Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 13 00:04:39.298545 containerd[1445]: time="2025-09-13T00:04:39.298509003Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:04:39.954955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205036745.mount: Deactivated successfully. 
Sep 13 00:04:40.694510 containerd[1445]: time="2025-09-13T00:04:40.694443627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:40.694974 containerd[1445]: time="2025-09-13T00:04:40.694940405Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 13 00:04:40.696083 containerd[1445]: time="2025-09-13T00:04:40.696025701Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:40.699226 containerd[1445]: time="2025-09-13T00:04:40.699170751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:40.700651 containerd[1445]: time="2025-09-13T00:04:40.700621101Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.402075002s" Sep 13 00:04:40.700721 containerd[1445]: time="2025-09-13T00:04:40.700657543Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 13 00:04:40.701369 containerd[1445]: time="2025-09-13T00:04:40.701167569Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:04:41.203532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount743767185.mount: Deactivated successfully. 
Sep 13 00:04:41.211577 containerd[1445]: time="2025-09-13T00:04:41.211516495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:41.213110 containerd[1445]: time="2025-09-13T00:04:41.213066482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 13 00:04:41.214141 containerd[1445]: time="2025-09-13T00:04:41.214092760Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:41.217088 containerd[1445]: time="2025-09-13T00:04:41.216970551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:41.218171 containerd[1445]: time="2025-09-13T00:04:41.218142377Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 516.938122ms" Sep 13 00:04:41.218298 containerd[1445]: time="2025-09-13T00:04:41.218175340Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 13 00:04:41.218614 containerd[1445]: time="2025-09-13T00:04:41.218594307Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:04:41.782891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633264438.mount: Deactivated successfully. Sep 13 00:04:43.461734 containerd[1445]: time="2025-09-13T00:04:43.461659659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:43.462230 containerd[1445]: time="2025-09-13T00:04:43.462195270Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 13 00:04:43.463434 containerd[1445]: time="2025-09-13T00:04:43.463396700Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:43.466495 containerd[1445]: time="2025-09-13T00:04:43.466445807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:43.468119 containerd[1445]: time="2025-09-13T00:04:43.468093214Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.249471508s" Sep 13 00:04:43.468310 containerd[1445]: time="2025-09-13T00:04:43.468205047Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 13 00:04:47.588953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
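The containerd entries above report a size and a pull duration for each control-plane image. The sizes and durations below are copied from those log lines; the derived throughput figures are my own arithmetic and do not appear in the log.

```python
# Derived arithmetic on the pull figures reported in the containerd log above.
# (image size in bytes, pull duration in seconds) pairs are taken verbatim
# from the "Pulled image ... size ... in ..." entries.
pulls = {
    "kube-apiserver:v1.31.13":          (25_683_924, 1.601467235),
    "kube-controller-manager:v1.31.13": (24_028_542, 1.149307987),
    "kube-scheduler:v1.31.13":          (18_696_299, 1.109452719),
    "kube-proxy:v1.31.13":              (26_953_926, 1.531462108),
    "coredns:v1.11.3":                  (16_948_420, 1.402075002),
    "pause:3.10":                       (267_933,    0.516938122),
    "etcd:3.5.15-0":                    (66_535_646, 2.249471508),
}

MIB = 1024 * 1024
for image, (size_bytes, secs) in pulls.items():
    rate = size_bytes / secs / MIB
    print(f"{image:35s} {size_bytes / MIB:7.1f} MiB in {secs:6.3f}s  ~{rate:5.1f} MiB/s")
```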
Sep 13 00:04:47.606258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:47.626880 systemd[1]: Reloading requested from client PID 2020 ('systemctl') (unit session-7.scope)... Sep 13 00:04:47.627056 systemd[1]: Reloading... Sep 13 00:04:47.697074 zram_generator::config[2056]: No configuration found. Sep 13 00:04:47.789763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:04:47.854681 systemd[1]: Reloading finished in 227 ms. Sep 13 00:04:47.900970 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:47.904748 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:04:47.904984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:47.906714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:48.006826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:48.011377 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:04:48.043199 kubelet[2106]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:48.043199 kubelet[2106]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:04:48.043199 kubelet[2106]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
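The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the kubelet config file, and earlier restarts failed because /var/lib/kubelet/config.yaml does not exist yet. The sketch below writes a minimal KubeletConfiguration of the kind kubeadm would eventually place there; it is an assumption for illustration, not a file recovered from this host (the cgroup driver, static pod path, and flexvolume directory are taken from values that appear later in this log).

```python
# Illustrative only: a minimal KubeletConfiguration with the config-file
# equivalents of the deprecated flags. Field values are assumptions based on
# the paths and settings reported elsewhere in this log.
from pathlib import Path

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
"""

target = Path("/var/lib/kubelet/config.yaml")
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(KUBELET_CONFIG)
print(f"wrote {target} ({len(KUBELET_CONFIG)} bytes)")
```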
Sep 13 00:04:48.043519 kubelet[2106]: I0913 00:04:48.043240 2106 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:04:48.453594 kubelet[2106]: I0913 00:04:48.453546 2106 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:04:48.453594 kubelet[2106]: I0913 00:04:48.453580 2106 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:04:48.453836 kubelet[2106]: I0913 00:04:48.453810 2106 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:04:48.475062 kubelet[2106]: E0913 00:04:48.475017 2106 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:48.476051 kubelet[2106]: I0913 00:04:48.475996 2106 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:04:48.701383 kubelet[2106]: E0913 00:04:48.701274 2106 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:04:48.701383 kubelet[2106]: I0913 00:04:48.701307 2106 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:04:48.704847 kubelet[2106]: I0913 00:04:48.704749 2106 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:04:48.705606 kubelet[2106]: I0913 00:04:48.705564 2106 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:04:48.705740 kubelet[2106]: I0913 00:04:48.705715 2106 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:04:48.705915 kubelet[2106]: I0913 00:04:48.705740 2106 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:04:48.706073 kubelet[2106]: I0913 00:04:48.706062 2106 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:04:48.706108 kubelet[2106]: I0913 00:04:48.706074 2106 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:04:48.706350 kubelet[2106]: I0913 00:04:48.706323 2106 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:48.708660 kubelet[2106]: I0913 00:04:48.708252 2106 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:04:48.708660 kubelet[2106]: I0913 00:04:48.708284 2106 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:04:48.708660 kubelet[2106]: I0913 00:04:48.708304 2106 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:04:48.708660 kubelet[2106]: I0913 00:04:48.708314 2106 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:04:48.712600 kubelet[2106]: I0913 00:04:48.712523 2106 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:04:48.713224 kubelet[2106]: W0913 00:04:48.713175 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 13 00:04:48.713308 kubelet[2106]: E0913 00:04:48.713272 2106 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:48.713379 kubelet[2106]: W0913 00:04:48.713177 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 13 00:04:48.713464 kubelet[2106]: E0913 00:04:48.713445 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:48.713530 kubelet[2106]: I0913 00:04:48.713514 2106 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:04:48.713801 kubelet[2106]: W0913 00:04:48.713771 2106 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:04:48.714722 kubelet[2106]: I0913 00:04:48.714707 2106 server.go:1274] "Started kubelet" Sep 13 00:04:48.715025 kubelet[2106]: I0913 00:04:48.714952 2106 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:04:48.716263 kubelet[2106]: I0913 00:04:48.716186 2106 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:04:48.716604 kubelet[2106]: I0913 00:04:48.716558 2106 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:04:48.717993 kubelet[2106]: I0913 00:04:48.717843 2106 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:04:48.719424 kubelet[2106]: I0913 00:04:48.719324 2106 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:04:48.719620 kubelet[2106]: I0913 00:04:48.719550 2106 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:04:48.721488 kubelet[2106]: E0913 00:04:48.719326 2106 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.78:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.78:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864aebeb6c483cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:04:48.714687437 +0000 UTC m=+0.700383712,LastTimestamp:2025-09-13 00:04:48.714687437 +0000 UTC m=+0.700383712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:04:48.721488 kubelet[2106]: I0913 00:04:48.720743 2106 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:04:48.721488 kubelet[2106]: I0913 00:04:48.720775 2106 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 
13 00:04:48.721488 kubelet[2106]: I0913 00:04:48.720899 2106 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:04:48.721488 kubelet[2106]: E0913 00:04:48.721347 2106 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:48.721488 kubelet[2106]: E0913 00:04:48.721439 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="200ms" Sep 13 00:04:48.721806 kubelet[2106]: W0913 00:04:48.721750 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 13 00:04:48.721851 kubelet[2106]: E0913 00:04:48.721817 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:48.722088 kubelet[2106]: E0913 00:04:48.722065 2106 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:04:48.722271 kubelet[2106]: I0913 00:04:48.722251 2106 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:04:48.722355 kubelet[2106]: I0913 00:04:48.722336 2106 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:04:48.723649 kubelet[2106]: I0913 00:04:48.723621 2106 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:04:48.736074 kubelet[2106]: I0913 00:04:48.736030 2106 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:04:48.737456 kubelet[2106]: I0913 00:04:48.737426 2106 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:04:48.737456 kubelet[2106]: I0913 00:04:48.737452 2106 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:04:48.737563 kubelet[2106]: I0913 00:04:48.737469 2106 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:04:48.737563 kubelet[2106]: E0913 00:04:48.737509 2106 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:04:48.738128 kubelet[2106]: I0913 00:04:48.738106 2106 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:04:48.738214 kubelet[2106]: I0913 00:04:48.738203 2106 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:04:48.738270 kubelet[2106]: I0913 00:04:48.738262 2106 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:48.739263 kubelet[2106]: W0913 00:04:48.739219 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 13 00:04:48.739662 kubelet[2106]: E0913 00:04:48.739273 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:48.741136 kubelet[2106]: I0913 00:04:48.741106 2106 policy_none.go:49] "None policy: Start" Sep 13 00:04:48.741725 kubelet[2106]: I0913 00:04:48.741709 2106 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:04:48.741781 kubelet[2106]: I0913 00:04:48.741734 2106 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:04:48.747493 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:04:48.761576 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:04:48.764656 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:04:48.774766 kubelet[2106]: I0913 00:04:48.774728 2106 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:04:48.774937 kubelet[2106]: I0913 00:04:48.774921 2106 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:04:48.774990 kubelet[2106]: I0913 00:04:48.774935 2106 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:04:48.775547 kubelet[2106]: I0913 00:04:48.775204 2106 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:04:48.776467 kubelet[2106]: E0913 00:04:48.776267 2106 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:04:48.845460 systemd[1]: Created slice kubepods-burstable-pod7490151d30ac4c04116cb84544107a57.slice - libcontainer container kubepods-burstable-pod7490151d30ac4c04116cb84544107a57.slice. Sep 13 00:04:48.867181 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. 
Sep 13 00:04:48.870134 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 13 00:04:48.876561 kubelet[2106]: I0913 00:04:48.876526 2106 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:04:48.877006 kubelet[2106]: E0913 00:04:48.876982 2106 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Sep 13 00:04:48.922293 kubelet[2106]: I0913 00:04:48.922241 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:48.922293 kubelet[2106]: I0913 00:04:48.922281 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:48.922293 kubelet[2106]: I0913 00:04:48.922303 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:04:48.922463 kubelet[2106]: I0913 00:04:48.922320 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7490151d30ac4c04116cb84544107a57-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7490151d30ac4c04116cb84544107a57\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:04:48.922463 kubelet[2106]: I0913 00:04:48.922337 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7490151d30ac4c04116cb84544107a57-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7490151d30ac4c04116cb84544107a57\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:04:48.922463 kubelet[2106]: I0913 00:04:48.922357 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:48.922463 kubelet[2106]: I0913 00:04:48.922383 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:48.922463 kubelet[2106]: I0913 00:04:48.922399 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7490151d30ac4c04116cb84544107a57-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7490151d30ac4c04116cb84544107a57\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:04:48.922575 kubelet[2106]: I0913 00:04:48.922421 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:48.922575 kubelet[2106]: E0913 00:04:48.922462 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="400ms" Sep 13 00:04:49.078364 kubelet[2106]: I0913 00:04:49.078252 2106 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:04:49.078758 kubelet[2106]: E0913 00:04:49.078733 2106 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Sep 13 00:04:49.165391 kubelet[2106]: E0913 00:04:49.165344 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:49.166085 containerd[1445]: time="2025-09-13T00:04:49.165938266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7490151d30ac4c04116cb84544107a57,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:49.169754 kubelet[2106]: E0913 00:04:49.169299 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:49.169826 containerd[1445]: time="2025-09-13T00:04:49.169680488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:49.172524 kubelet[2106]: E0913 00:04:49.172476 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:49.172919 containerd[1445]: time="2025-09-13T00:04:49.172886584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:49.323158 kubelet[2106]: E0913 00:04:49.323111 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="800ms" Sep 13 00:04:49.480994 kubelet[2106]: I0913 00:04:49.480864 2106 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:04:49.481507 kubelet[2106]: E0913 00:04:49.481478 2106 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Sep 13 00:04:49.558334 kubelet[2106]: W0913 
00:04:49.558293 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 13 00:04:49.558435 kubelet[2106]: E0913 00:04:49.558346 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:49.570958 kubelet[2106]: W0913 00:04:49.570906 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 13 00:04:49.571052 kubelet[2106]: E0913 00:04:49.570960 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:49.677963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount291524399.mount: Deactivated successfully. Sep 13 00:04:49.683009 containerd[1445]: time="2025-09-13T00:04:49.682960435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:49.683826 containerd[1445]: time="2025-09-13T00:04:49.683795515Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:49.685060 containerd[1445]: time="2025-09-13T00:04:49.684627399Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:04:49.688084 containerd[1445]: time="2025-09-13T00:04:49.688004057Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:49.689841 containerd[1445]: time="2025-09-13T00:04:49.689692147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:04:49.689841 containerd[1445]: time="2025-09-13T00:04:49.689797495Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:49.692078 containerd[1445]: time="2025-09-13T00:04:49.690681894Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 13 00:04:49.692747 containerd[1445]: time="2025-09-13T00:04:49.692717737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:49.693795 containerd[1445]: time="2025-09-13T00:04:49.693763912Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 520.812154ms" Sep 13 00:04:49.695203 containerd[1445]: time="2025-09-13T00:04:49.695175333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 525.428393ms" Sep 13 00:04:49.697478 containerd[1445]: time="2025-09-13T00:04:49.697401825Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 531.373107ms" Sep 13 00:04:49.791843 containerd[1445]: time="2025-09-13T00:04:49.791306817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:49.791843 containerd[1445]: time="2025-09-13T00:04:49.791364124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:49.791843 containerd[1445]: time="2025-09-13T00:04:49.791403140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:49.791843 containerd[1445]: time="2025-09-13T00:04:49.791672461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:49.791843 containerd[1445]: time="2025-09-13T00:04:49.791720503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:49.791843 containerd[1445]: time="2025-09-13T00:04:49.791735958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:49.792552 containerd[1445]: time="2025-09-13T00:04:49.792459938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:49.792552 containerd[1445]: time="2025-09-13T00:04:49.792530743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:49.792638 containerd[1445]: time="2025-09-13T00:04:49.792554584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:49.793721 containerd[1445]: time="2025-09-13T00:04:49.793546288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:49.794221 containerd[1445]: time="2025-09-13T00:04:49.793855664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:49.794585 containerd[1445]: time="2025-09-13T00:04:49.794388955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:49.818220 systemd[1]: Started cri-containerd-24323af7bac8e2e0ef18cdbd00df32ccb570cf0fdece61709e4dbc0fe70cc172.scope - libcontainer container 24323af7bac8e2e0ef18cdbd00df32ccb570cf0fdece61709e4dbc0fe70cc172. Sep 13 00:04:49.819294 systemd[1]: Started cri-containerd-86e32069f4f024c53011fb26aaee0f9b87fb146cdf063b85c62ddd88cc2af329.scope - libcontainer container 86e32069f4f024c53011fb26aaee0f9b87fb146cdf063b85c62ddd88cc2af329. Sep 13 00:04:49.820329 systemd[1]: Started cri-containerd-9df13431354fcfd4c3c82da4b269a1ad1aec55e484ebf2c833ab1d8f09f731f6.scope - libcontainer container 9df13431354fcfd4c3c82da4b269a1ad1aec55e484ebf2c833ab1d8f09f731f6. Sep 13 00:04:49.850602 containerd[1445]: time="2025-09-13T00:04:49.850468300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7490151d30ac4c04116cb84544107a57,Namespace:kube-system,Attempt:0,} returns sandbox id \"24323af7bac8e2e0ef18cdbd00df32ccb570cf0fdece61709e4dbc0fe70cc172\"" Sep 13 00:04:49.851680 kubelet[2106]: E0913 00:04:49.851649 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:49.855489 containerd[1445]: time="2025-09-13T00:04:49.855450222Z" level=info msg="CreateContainer within sandbox \"24323af7bac8e2e0ef18cdbd00df32ccb570cf0fdece61709e4dbc0fe70cc172\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:04:49.859014 containerd[1445]: time="2025-09-13T00:04:49.858976716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"86e32069f4f024c53011fb26aaee0f9b87fb146cdf063b85c62ddd88cc2af329\"" Sep 13 00:04:49.859579 kubelet[2106]: E0913 00:04:49.859558 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:49.861009 containerd[1445]: time="2025-09-13T00:04:49.860983686Z" level=info msg="CreateContainer within sandbox \"86e32069f4f024c53011fb26aaee0f9b87fb146cdf063b85c62ddd88cc2af329\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:04:49.865776 containerd[1445]: time="2025-09-13T00:04:49.865744369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9df13431354fcfd4c3c82da4b269a1ad1aec55e484ebf2c833ab1d8f09f731f6\"" Sep 13 00:04:49.866606 kubelet[2106]: E0913 00:04:49.866540 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:49.869376 containerd[1445]: time="2025-09-13T00:04:49.869327132Z" level=info msg="CreateContainer within sandbox \"9df13431354fcfd4c3c82da4b269a1ad1aec55e484ebf2c833ab1d8f09f731f6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:04:49.875232 containerd[1445]: time="2025-09-13T00:04:49.875194531Z" level=info msg="CreateContainer within sandbox 
\"24323af7bac8e2e0ef18cdbd00df32ccb570cf0fdece61709e4dbc0fe70cc172\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b248dbe9076b59b3260af12f8205436ffadc8918d4756ea8fd2cced87f48d97a\"" Sep 13 00:04:49.875772 containerd[1445]: time="2025-09-13T00:04:49.875749028Z" level=info msg="StartContainer for \"b248dbe9076b59b3260af12f8205436ffadc8918d4756ea8fd2cced87f48d97a\"" Sep 13 00:04:49.878775 containerd[1445]: time="2025-09-13T00:04:49.878697663Z" level=info msg="CreateContainer within sandbox \"86e32069f4f024c53011fb26aaee0f9b87fb146cdf063b85c62ddd88cc2af329\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"82becee11b6889a348cc9120f6f6bfc6a410e674eca0613911261c991a7d2f32\"" Sep 13 00:04:49.881449 containerd[1445]: time="2025-09-13T00:04:49.881081060Z" level=info msg="StartContainer for \"82becee11b6889a348cc9120f6f6bfc6a410e674eca0613911261c991a7d2f32\"" Sep 13 00:04:49.885485 containerd[1445]: time="2025-09-13T00:04:49.885449702Z" level=info msg="CreateContainer within sandbox \"9df13431354fcfd4c3c82da4b269a1ad1aec55e484ebf2c833ab1d8f09f731f6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"47c3e984249212e300a7928f8db6707b15fcdc7be089e5630d59338bdd0c1cea\"" Sep 13 00:04:49.886225 containerd[1445]: time="2025-09-13T00:04:49.886201836Z" level=info msg="StartContainer for \"47c3e984249212e300a7928f8db6707b15fcdc7be089e5630d59338bdd0c1cea\"" Sep 13 00:04:49.906236 systemd[1]: Started cri-containerd-b248dbe9076b59b3260af12f8205436ffadc8918d4756ea8fd2cced87f48d97a.scope - libcontainer container b248dbe9076b59b3260af12f8205436ffadc8918d4756ea8fd2cced87f48d97a. Sep 13 00:04:49.910714 systemd[1]: Started cri-containerd-47c3e984249212e300a7928f8db6707b15fcdc7be089e5630d59338bdd0c1cea.scope - libcontainer container 47c3e984249212e300a7928f8db6707b15fcdc7be089e5630d59338bdd0c1cea. Sep 13 00:04:49.911605 systemd[1]: Started cri-containerd-82becee11b6889a348cc9120f6f6bfc6a410e674eca0613911261c991a7d2f32.scope - libcontainer container 82becee11b6889a348cc9120f6f6bfc6a410e674eca0613911261c991a7d2f32. 
Sep 13 00:04:49.942550 containerd[1445]: time="2025-09-13T00:04:49.942379820Z" level=info msg="StartContainer for \"47c3e984249212e300a7928f8db6707b15fcdc7be089e5630d59338bdd0c1cea\" returns successfully" Sep 13 00:04:49.948241 containerd[1445]: time="2025-09-13T00:04:49.947843398Z" level=info msg="StartContainer for \"82becee11b6889a348cc9120f6f6bfc6a410e674eca0613911261c991a7d2f32\" returns successfully" Sep 13 00:04:49.951341 containerd[1445]: time="2025-09-13T00:04:49.951273769Z" level=info msg="StartContainer for \"b248dbe9076b59b3260af12f8205436ffadc8918d4756ea8fd2cced87f48d97a\" returns successfully" Sep 13 00:04:50.283924 kubelet[2106]: I0913 00:04:50.283634 2106 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:04:50.748618 kubelet[2106]: E0913 00:04:50.748587 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:50.752312 kubelet[2106]: E0913 00:04:50.752142 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:50.754082 kubelet[2106]: E0913 00:04:50.754058 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:51.758112 kubelet[2106]: E0913 00:04:51.758053 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:51.758457 kubelet[2106]: E0913 00:04:51.758389 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:52.088232 kubelet[2106]: E0913 00:04:52.088107 2106 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:04:52.181997 kubelet[2106]: I0913 00:04:52.181960 2106 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:04:52.181997 kubelet[2106]: E0913 00:04:52.181995 2106 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 00:04:52.711289 kubelet[2106]: I0913 00:04:52.711201 2106 apiserver.go:52] "Watching apiserver" Sep 13 00:04:52.724362 kubelet[2106]: I0913 00:04:52.724316 2106 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:04:53.032279 kubelet[2106]: E0913 00:04:53.031879 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:53.760243 kubelet[2106]: E0913 00:04:53.760175 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:54.459019 systemd[1]: Reloading requested from client PID 2384 ('systemctl') (unit session-7.scope)... Sep 13 00:04:54.459051 systemd[1]: Reloading... Sep 13 00:04:54.539081 zram_generator::config[2423]: No configuration found. 
Sep 13 00:04:54.638514 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:04:54.720082 systemd[1]: Reloading finished in 260 ms. Sep 13 00:04:54.757224 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:54.770113 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:04:54.770360 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:54.778606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:54.893774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:54.898879 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:04:54.938533 kubelet[2465]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:54.938533 kubelet[2465]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:04:54.938533 kubelet[2465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:54.938915 kubelet[2465]: I0913 00:04:54.938596 2465 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:04:54.947113 kubelet[2465]: I0913 00:04:54.946184 2465 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:04:54.947113 kubelet[2465]: I0913 00:04:54.946212 2465 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:04:54.947113 kubelet[2465]: I0913 00:04:54.946449 2465 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:04:54.947888 kubelet[2465]: I0913 00:04:54.947853 2465 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:04:54.950714 kubelet[2465]: I0913 00:04:54.950325 2465 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:04:54.955700 kubelet[2465]: E0913 00:04:54.955660 2465 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:04:54.955700 kubelet[2465]: I0913 00:04:54.955693 2465 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:04:54.958123 kubelet[2465]: I0913 00:04:54.958098 2465 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:04:54.958370 kubelet[2465]: I0913 00:04:54.958213 2465 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:04:54.958370 kubelet[2465]: I0913 00:04:54.958315 2465 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:04:54.958689 kubelet[2465]: I0913 00:04:54.958339 2465 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:04:54.958689 kubelet[2465]: I0913 00:04:54.958526 2465 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:04:54.958689 kubelet[2465]: I0913 00:04:54.958535 2465 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:04:54.958689 kubelet[2465]: I0913 00:04:54.958565 2465 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:54.958689 kubelet[2465]: I0913 00:04:54.958689 2465 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:04:54.959944 kubelet[2465]: I0913 00:04:54.958700 2465 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:04:54.959944 kubelet[2465]: I0913 00:04:54.958723 2465 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:04:54.959944 kubelet[2465]: I0913 00:04:54.958734 2465 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:04:54.961277 kubelet[2465]: I0913 00:04:54.960824 2465 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:04:54.961619 kubelet[2465]: I0913 00:04:54.961597 2465 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:04:54.962496 kubelet[2465]: I0913 00:04:54.962470 2465 server.go:1274] "Started kubelet" Sep 13 00:04:54.963353 kubelet[2465]: I0913 00:04:54.963305 2465 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:04:54.963635 kubelet[2465]: I0913 
00:04:54.963584 2465 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:04:54.963904 kubelet[2465]: I0913 00:04:54.963884 2465 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:04:54.966088 kubelet[2465]: I0913 00:04:54.964796 2465 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:04:54.966088 kubelet[2465]: I0913 00:04:54.965624 2465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:04:54.968204 kubelet[2465]: I0913 00:04:54.967870 2465 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:04:54.969297 kubelet[2465]: I0913 00:04:54.969279 2465 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:04:54.969494 kubelet[2465]: E0913 00:04:54.969478 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:04:54.970174 kubelet[2465]: I0913 00:04:54.970151 2465 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:04:54.970427 kubelet[2465]: I0913 00:04:54.970370 2465 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:04:54.973555 kubelet[2465]: I0913 00:04:54.973528 2465 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:04:54.979962 kubelet[2465]: I0913 00:04:54.979922 2465 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:04:54.987108 kubelet[2465]: I0913 00:04:54.986942 2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:04:54.987959 kubelet[2465]: I0913 00:04:54.987921 2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:04:54.987959 kubelet[2465]: I0913 00:04:54.987950 2465 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:04:54.988068 kubelet[2465]: I0913 00:04:54.987970 2465 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:04:54.988068 kubelet[2465]: E0913 00:04:54.988033 2465 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:04:54.996556 kubelet[2465]: I0913 00:04:54.995376 2465 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:04:55.000422 kubelet[2465]: E0913 00:04:55.000375 2465 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:04:55.032509 kubelet[2465]: I0913 00:04:55.032480 2465 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:04:55.032509 kubelet[2465]: I0913 00:04:55.032497 2465 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:04:55.032509 kubelet[2465]: I0913 00:04:55.032517 2465 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:55.032688 kubelet[2465]: I0913 00:04:55.032667 2465 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:04:55.032717 kubelet[2465]: I0913 00:04:55.032678 2465 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:04:55.032717 kubelet[2465]: I0913 00:04:55.032696 2465 policy_none.go:49] "None policy: Start" Sep 13 00:04:55.033327 kubelet[2465]: I0913 00:04:55.033314 2465 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:04:55.033395 kubelet[2465]: I0913 00:04:55.033333 2465 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:04:55.033514 kubelet[2465]: I0913 00:04:55.033501 2465 state_mem.go:75] "Updated machine memory state" Sep 13 00:04:55.036886 kubelet[2465]: I0913 00:04:55.036864 2465 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:04:55.037056 kubelet[2465]: I0913 00:04:55.037027 2465 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:04:55.037461 kubelet[2465]: I0913 00:04:55.037346 2465 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:04:55.037578 kubelet[2465]: I0913 00:04:55.037564 2465 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:04:55.102794 kubelet[2465]: E0913 00:04:55.102701 2465 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:04:55.142308 kubelet[2465]: I0913 00:04:55.142224 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:04:55.153180 kubelet[2465]: I0913 00:04:55.153147 2465 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 13 00:04:55.153303 kubelet[2465]: I0913 00:04:55.153234 2465 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:04:55.172563 kubelet[2465]: I0913 00:04:55.172317 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:55.172563 kubelet[2465]: I0913 00:04:55.172377 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:04:55.172563 kubelet[2465]: I0913 00:04:55.172397 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7490151d30ac4c04116cb84544107a57-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7490151d30ac4c04116cb84544107a57\") " 
pod="kube-system/kube-apiserver-localhost" Sep 13 00:04:55.172563 kubelet[2465]: I0913 00:04:55.172420 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7490151d30ac4c04116cb84544107a57-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7490151d30ac4c04116cb84544107a57\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:04:55.172563 kubelet[2465]: I0913 00:04:55.172436 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7490151d30ac4c04116cb84544107a57-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7490151d30ac4c04116cb84544107a57\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:04:55.172768 kubelet[2465]: I0913 00:04:55.172453 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:55.172768 kubelet[2465]: I0913 00:04:55.172467 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:55.172768 kubelet[2465]: I0913 00:04:55.172487 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:55.172768 kubelet[2465]: I0913 00:04:55.172507 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:04:55.398829 kubelet[2465]: E0913 00:04:55.398795 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:55.403188 kubelet[2465]: E0913 00:04:55.402953 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:55.403363 kubelet[2465]: E0913 00:04:55.403195 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:55.959664 kubelet[2465]: I0913 00:04:55.959622 2465 apiserver.go:52] "Watching apiserver" Sep 13 00:04:55.971925 kubelet[2465]: I0913 00:04:55.970367 2465 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:04:56.016009 kubelet[2465]: E0913 00:04:56.015600 2465 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:56.016398 kubelet[2465]: E0913 00:04:56.016371 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:56.025290 kubelet[2465]: E0913 00:04:56.025160 2465 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:04:56.025396 kubelet[2465]: E0913 00:04:56.025329 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:56.039791 kubelet[2465]: I0913 00:04:56.039632 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.039616435 podStartE2EDuration="3.039616435s" podCreationTimestamp="2025-09-13 00:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:56.039616315 +0000 UTC m=+1.137262838" watchObservedRunningTime="2025-09-13 00:04:56.039616435 +0000 UTC m=+1.137262958" Sep 13 00:04:56.046786 kubelet[2465]: I0913 00:04:56.046740 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.046722252 podStartE2EDuration="1.046722252s" podCreationTimestamp="2025-09-13 00:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:56.046628392 +0000 UTC m=+1.144274915" watchObservedRunningTime="2025-09-13 00:04:56.046722252 +0000 UTC m=+1.144368775" Sep 13 00:04:56.065549 kubelet[2465]: I0913 00:04:56.065495 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.065477783 podStartE2EDuration="1.065477783s" podCreationTimestamp="2025-09-13 00:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:56.056935244 +0000 UTC m=+1.154581767" watchObservedRunningTime="2025-09-13 00:04:56.065477783 +0000 UTC m=+1.163124306" Sep 13 00:04:57.016502 kubelet[2465]: E0913 00:04:57.016465 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:58.018328 kubelet[2465]: E0913 00:04:58.018300 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:04:59.962691 kubelet[2465]: I0913 00:04:59.962636 2465 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:04:59.963346 containerd[1445]: time="2025-09-13T00:04:59.962966012Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:04:59.964645 kubelet[2465]: I0913 00:04:59.963625 2465 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:05:00.171343 kubelet[2465]: E0913 00:05:00.171309 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:00.877861 systemd[1]: Created slice kubepods-besteffort-podc577dbbd_ff91_4d9f_bbaa_45cde6b4ebe9.slice - libcontainer container kubepods-besteffort-podc577dbbd_ff91_4d9f_bbaa_45cde6b4ebe9.slice. Sep 13 00:05:00.917282 kubelet[2465]: I0913 00:05:00.917175 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c577dbbd-ff91-4d9f-bbaa-45cde6b4ebe9-xtables-lock\") pod \"kube-proxy-k555z\" (UID: \"c577dbbd-ff91-4d9f-bbaa-45cde6b4ebe9\") " pod="kube-system/kube-proxy-k555z" Sep 13 00:05:00.917282 kubelet[2465]: I0913 00:05:00.917223 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c577dbbd-ff91-4d9f-bbaa-45cde6b4ebe9-lib-modules\") pod \"kube-proxy-k555z\" (UID: \"c577dbbd-ff91-4d9f-bbaa-45cde6b4ebe9\") " pod="kube-system/kube-proxy-k555z" Sep 13 00:05:00.917282 kubelet[2465]: I0913 00:05:00.917240 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fx58\" (UniqueName: \"kubernetes.io/projected/c577dbbd-ff91-4d9f-bbaa-45cde6b4ebe9-kube-api-access-9fx58\") pod \"kube-proxy-k555z\" (UID: \"c577dbbd-ff91-4d9f-bbaa-45cde6b4ebe9\") " pod="kube-system/kube-proxy-k555z" Sep 13 00:05:00.917690 kubelet[2465]: I0913 00:05:00.917390 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c577dbbd-ff91-4d9f-bbaa-45cde6b4ebe9-kube-proxy\") pod \"kube-proxy-k555z\" (UID: \"c577dbbd-ff91-4d9f-bbaa-45cde6b4ebe9\") " pod="kube-system/kube-proxy-k555z" Sep 13 00:05:01.026296 kubelet[2465]: E0913 00:05:01.026126 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:01.056708 systemd[1]: Created slice kubepods-besteffort-poddf75ec53_5761_4176_99b0_d6bc95e58e23.slice - libcontainer container kubepods-besteffort-poddf75ec53_5761_4176_99b0_d6bc95e58e23.slice. 
Sep 13 00:05:01.118923 kubelet[2465]: I0913 00:05:01.118863 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df75ec53-5761-4176-99b0-d6bc95e58e23-var-lib-calico\") pod \"tigera-operator-58fc44c59b-l5xgr\" (UID: \"df75ec53-5761-4176-99b0-d6bc95e58e23\") " pod="tigera-operator/tigera-operator-58fc44c59b-l5xgr" Sep 13 00:05:01.119059 kubelet[2465]: I0913 00:05:01.118925 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb976\" (UniqueName: \"kubernetes.io/projected/df75ec53-5761-4176-99b0-d6bc95e58e23-kube-api-access-rb976\") pod \"tigera-operator-58fc44c59b-l5xgr\" (UID: \"df75ec53-5761-4176-99b0-d6bc95e58e23\") " pod="tigera-operator/tigera-operator-58fc44c59b-l5xgr" Sep 13 00:05:01.191686 kubelet[2465]: E0913 00:05:01.191461 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:01.193728 containerd[1445]: time="2025-09-13T00:05:01.193562085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k555z,Uid:c577dbbd-ff91-4d9f-bbaa-45cde6b4ebe9,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:01.221920 containerd[1445]: time="2025-09-13T00:05:01.221767679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:01.222567 containerd[1445]: time="2025-09-13T00:05:01.222411590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:01.222783 containerd[1445]: time="2025-09-13T00:05:01.222471273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:01.222844 containerd[1445]: time="2025-09-13T00:05:01.222765727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:01.252273 systemd[1]: Started cri-containerd-ec8a5c385e8e5fedc4414b6e29421b0754a796eaaa175577901306d92e623b63.scope - libcontainer container ec8a5c385e8e5fedc4414b6e29421b0754a796eaaa175577901306d92e623b63. 
Sep 13 00:05:01.277927 containerd[1445]: time="2025-09-13T00:05:01.277874292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k555z,Uid:c577dbbd-ff91-4d9f-bbaa-45cde6b4ebe9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec8a5c385e8e5fedc4414b6e29421b0754a796eaaa175577901306d92e623b63\"" Sep 13 00:05:01.278844 kubelet[2465]: E0913 00:05:01.278820 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:01.281551 containerd[1445]: time="2025-09-13T00:05:01.281452224Z" level=info msg="CreateContainer within sandbox \"ec8a5c385e8e5fedc4414b6e29421b0754a796eaaa175577901306d92e623b63\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:05:01.298207 containerd[1445]: time="2025-09-13T00:05:01.298145985Z" level=info msg="CreateContainer within sandbox \"ec8a5c385e8e5fedc4414b6e29421b0754a796eaaa175577901306d92e623b63\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7b345666dee295da037ee6187285d0361d550d8a0e47c95730f2ce1ace96a396\"" Sep 13 00:05:01.299980 containerd[1445]: time="2025-09-13T00:05:01.298624128Z" level=info msg="StartContainer for \"7b345666dee295da037ee6187285d0361d550d8a0e47c95730f2ce1ace96a396\"" Sep 13 00:05:01.328226 systemd[1]: Started cri-containerd-7b345666dee295da037ee6187285d0361d550d8a0e47c95730f2ce1ace96a396.scope - libcontainer container 7b345666dee295da037ee6187285d0361d550d8a0e47c95730f2ce1ace96a396. Sep 13 00:05:01.355545 containerd[1445]: time="2025-09-13T00:05:01.355469537Z" level=info msg="StartContainer for \"7b345666dee295da037ee6187285d0361d550d8a0e47c95730f2ce1ace96a396\" returns successfully" Sep 13 00:05:01.362799 containerd[1445]: time="2025-09-13T00:05:01.362441952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-l5xgr,Uid:df75ec53-5761-4176-99b0-d6bc95e58e23,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:05:01.387181 containerd[1445]: time="2025-09-13T00:05:01.386916367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:01.387181 containerd[1445]: time="2025-09-13T00:05:01.386975850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:01.387181 containerd[1445]: time="2025-09-13T00:05:01.386991811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:01.387181 containerd[1445]: time="2025-09-13T00:05:01.387081095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:01.408245 systemd[1]: Started cri-containerd-b072fc1baa28d46d27c7b4404cdc7cbdf72d0b8bd518a78884fcdeed9a150b51.scope - libcontainer container b072fc1baa28d46d27c7b4404cdc7cbdf72d0b8bd518a78884fcdeed9a150b51. 
Sep 13 00:05:01.442905 containerd[1445]: time="2025-09-13T00:05:01.442352388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-l5xgr,Uid:df75ec53-5761-4176-99b0-d6bc95e58e23,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b072fc1baa28d46d27c7b4404cdc7cbdf72d0b8bd518a78884fcdeed9a150b51\"" Sep 13 00:05:01.444675 containerd[1445]: time="2025-09-13T00:05:01.444644018Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:05:02.032366 kubelet[2465]: E0913 00:05:02.031881 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:02.052924 kubelet[2465]: I0913 00:05:02.052020 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k555z" podStartSLOduration=2.052001562 podStartE2EDuration="2.052001562s" podCreationTimestamp="2025-09-13 00:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:02.050529535 +0000 UTC m=+7.148176058" watchObservedRunningTime="2025-09-13 00:05:02.052001562 +0000 UTC m=+7.149648085" Sep 13 00:05:02.683638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734600913.mount: Deactivated successfully. Sep 13 00:05:03.229992 containerd[1445]: time="2025-09-13T00:05:03.229173489Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:03.229992 containerd[1445]: time="2025-09-13T00:05:03.229715473Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 13 00:05:03.231311 containerd[1445]: time="2025-09-13T00:05:03.231278780Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:03.234351 containerd[1445]: time="2025-09-13T00:05:03.234308710Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:03.235240 containerd[1445]: time="2025-09-13T00:05:03.235205308Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.790522648s" Sep 13 00:05:03.235367 containerd[1445]: time="2025-09-13T00:05:03.235349835Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 13 00:05:03.238887 containerd[1445]: time="2025-09-13T00:05:03.238850545Z" level=info msg="CreateContainer within sandbox \"b072fc1baa28d46d27c7b4404cdc7cbdf72d0b8bd518a78884fcdeed9a150b51\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:05:03.249858 containerd[1445]: time="2025-09-13T00:05:03.249797095Z" level=info msg="CreateContainer within sandbox \"b072fc1baa28d46d27c7b4404cdc7cbdf72d0b8bd518a78884fcdeed9a150b51\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"32c6c5f7b747b254efcc2341e9aee5bcfd180d9cd5a8edce12f45b6706db4c67\"" Sep 13 00:05:03.250478 containerd[1445]: time="2025-09-13T00:05:03.250449523Z" level=info msg="StartContainer for \"32c6c5f7b747b254efcc2341e9aee5bcfd180d9cd5a8edce12f45b6706db4c67\"" Sep 13 00:05:03.280250 systemd[1]: Started cri-containerd-32c6c5f7b747b254efcc2341e9aee5bcfd180d9cd5a8edce12f45b6706db4c67.scope - libcontainer container 32c6c5f7b747b254efcc2341e9aee5bcfd180d9cd5a8edce12f45b6706db4c67. Sep 13 00:05:03.300580 containerd[1445]: time="2025-09-13T00:05:03.300445911Z" level=info msg="StartContainer for \"32c6c5f7b747b254efcc2341e9aee5bcfd180d9cd5a8edce12f45b6706db4c67\" returns successfully" Sep 13 00:05:04.046951 kubelet[2465]: I0913 00:05:04.046897 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-l5xgr" podStartSLOduration=1.254555825 podStartE2EDuration="3.046879393s" podCreationTimestamp="2025-09-13 00:05:01 +0000 UTC" firstStartedPulling="2025-09-13 00:05:01.444203397 +0000 UTC m=+6.541849920" lastFinishedPulling="2025-09-13 00:05:03.236526965 +0000 UTC m=+8.334173488" observedRunningTime="2025-09-13 00:05:04.046678585 +0000 UTC m=+9.144325108" watchObservedRunningTime="2025-09-13 00:05:04.046879393 +0000 UTC m=+9.144525916" Sep 13 00:05:05.271032 kubelet[2465]: E0913 00:05:05.270976 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:06.047813 kubelet[2465]: E0913 00:05:06.047759 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:07.295994 kubelet[2465]: E0913 00:05:07.295953 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:08.521121 sudo[1624]: pam_unix(sudo:session): session closed for user root Sep 13 00:05:08.525110 sshd[1621]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:08.528414 systemd[1]: sshd@6-10.0.0.78:22-10.0.0.1:42970.service: Deactivated successfully. Sep 13 00:05:08.530182 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:05:08.530404 systemd[1]: session-7.scope: Consumed 6.006s CPU time, 151.5M memory peak, 0B memory swap peak. Sep 13 00:05:08.531422 systemd-logind[1428]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:05:08.533444 systemd-logind[1428]: Removed session 7. Sep 13 00:05:10.101147 update_engine[1433]: I20250913 00:05:10.101075 1433 update_attempter.cc:509] Updating boot flags... Sep 13 00:05:10.137090 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2881) Sep 13 00:05:14.912688 systemd[1]: Created slice kubepods-besteffort-pod509f6a3f_f7fd_4af7_a9da_5a6c6e430544.slice - libcontainer container kubepods-besteffort-pod509f6a3f_f7fd_4af7_a9da_5a6c6e430544.slice. 
Sep 13 00:05:14.914260 kubelet[2465]: I0913 00:05:14.914229 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/509f6a3f-f7fd-4af7-a9da-5a6c6e430544-typha-certs\") pod \"calico-typha-6bb854d69f-qvhzb\" (UID: \"509f6a3f-f7fd-4af7-a9da-5a6c6e430544\") " pod="calico-system/calico-typha-6bb854d69f-qvhzb" Sep 13 00:05:14.914541 kubelet[2465]: I0913 00:05:14.914270 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/509f6a3f-f7fd-4af7-a9da-5a6c6e430544-tigera-ca-bundle\") pod \"calico-typha-6bb854d69f-qvhzb\" (UID: \"509f6a3f-f7fd-4af7-a9da-5a6c6e430544\") " pod="calico-system/calico-typha-6bb854d69f-qvhzb" Sep 13 00:05:14.914541 kubelet[2465]: I0913 00:05:14.914294 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k9ld\" (UniqueName: \"kubernetes.io/projected/509f6a3f-f7fd-4af7-a9da-5a6c6e430544-kube-api-access-5k9ld\") pod \"calico-typha-6bb854d69f-qvhzb\" (UID: \"509f6a3f-f7fd-4af7-a9da-5a6c6e430544\") " pod="calico-system/calico-typha-6bb854d69f-qvhzb" Sep 13 00:05:15.155020 systemd[1]: Created slice kubepods-besteffort-podebf4a570_5ced_4f02_bbc9_43cb8099fdca.slice - libcontainer container kubepods-besteffort-podebf4a570_5ced_4f02_bbc9_43cb8099fdca.slice. Sep 13 00:05:15.216536 kubelet[2465]: E0913 00:05:15.216407 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:15.216536 kubelet[2465]: I0913 00:05:15.216494 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebf4a570-5ced-4f02-bbc9-43cb8099fdca-lib-modules\") pod \"calico-node-g29x9\" (UID: \"ebf4a570-5ced-4f02-bbc9-43cb8099fdca\") " pod="calico-system/calico-node-g29x9" Sep 13 00:05:15.216536 kubelet[2465]: I0913 00:05:15.216529 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf4a570-5ced-4f02-bbc9-43cb8099fdca-tigera-ca-bundle\") pod \"calico-node-g29x9\" (UID: \"ebf4a570-5ced-4f02-bbc9-43cb8099fdca\") " pod="calico-system/calico-node-g29x9" Sep 13 00:05:15.216694 kubelet[2465]: I0913 00:05:15.216547 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ebf4a570-5ced-4f02-bbc9-43cb8099fdca-var-lib-calico\") pod \"calico-node-g29x9\" (UID: \"ebf4a570-5ced-4f02-bbc9-43cb8099fdca\") " pod="calico-system/calico-node-g29x9" Sep 13 00:05:15.216694 kubelet[2465]: I0913 00:05:15.216566 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ebf4a570-5ced-4f02-bbc9-43cb8099fdca-policysync\") pod \"calico-node-g29x9\" (UID: \"ebf4a570-5ced-4f02-bbc9-43cb8099fdca\") " pod="calico-system/calico-node-g29x9" Sep 13 00:05:15.216694 kubelet[2465]: I0913 00:05:15.216582 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vrl8\" (UniqueName: \"kubernetes.io/projected/ebf4a570-5ced-4f02-bbc9-43cb8099fdca-kube-api-access-2vrl8\") pod \"calico-node-g29x9\" (UID: 
\"ebf4a570-5ced-4f02-bbc9-43cb8099fdca\") " pod="calico-system/calico-node-g29x9" Sep 13 00:05:15.216694 kubelet[2465]: I0913 00:05:15.216601 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ebf4a570-5ced-4f02-bbc9-43cb8099fdca-cni-bin-dir\") pod \"calico-node-g29x9\" (UID: \"ebf4a570-5ced-4f02-bbc9-43cb8099fdca\") " pod="calico-system/calico-node-g29x9" Sep 13 00:05:15.216694 kubelet[2465]: I0913 00:05:15.216617 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf4a570-5ced-4f02-bbc9-43cb8099fdca-cni-log-dir\") pod \"calico-node-g29x9\" (UID: \"ebf4a570-5ced-4f02-bbc9-43cb8099fdca\") " pod="calico-system/calico-node-g29x9" Sep 13 00:05:15.216806 kubelet[2465]: I0913 00:05:15.216634 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ebf4a570-5ced-4f02-bbc9-43cb8099fdca-node-certs\") pod \"calico-node-g29x9\" (UID: \"ebf4a570-5ced-4f02-bbc9-43cb8099fdca\") " pod="calico-system/calico-node-g29x9" Sep 13 00:05:15.216806 kubelet[2465]: I0913 00:05:15.216649 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebf4a570-5ced-4f02-bbc9-43cb8099fdca-xtables-lock\") pod \"calico-node-g29x9\" (UID: \"ebf4a570-5ced-4f02-bbc9-43cb8099fdca\") " pod="calico-system/calico-node-g29x9" Sep 13 00:05:15.216806 kubelet[2465]: I0913 00:05:15.216667 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ebf4a570-5ced-4f02-bbc9-43cb8099fdca-flexvol-driver-host\") pod \"calico-node-g29x9\" (UID: \"ebf4a570-5ced-4f02-bbc9-43cb8099fdca\") " pod="calico-system/calico-node-g29x9" Sep 13 00:05:15.216806 kubelet[2465]: I0913 00:05:15.216684 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ebf4a570-5ced-4f02-bbc9-43cb8099fdca-var-run-calico\") pod \"calico-node-g29x9\" (UID: \"ebf4a570-5ced-4f02-bbc9-43cb8099fdca\") " pod="calico-system/calico-node-g29x9" Sep 13 00:05:15.216806 kubelet[2465]: I0913 00:05:15.216755 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ebf4a570-5ced-4f02-bbc9-43cb8099fdca-cni-net-dir\") pod \"calico-node-g29x9\" (UID: \"ebf4a570-5ced-4f02-bbc9-43cb8099fdca\") " pod="calico-system/calico-node-g29x9" Sep 13 00:05:15.217564 containerd[1445]: time="2025-09-13T00:05:15.217011957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb854d69f-qvhzb,Uid:509f6a3f-f7fd-4af7-a9da-5a6c6e430544,Namespace:calico-system,Attempt:0,}" Sep 13 00:05:15.248558 containerd[1445]: time="2025-09-13T00:05:15.248437686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:15.248558 containerd[1445]: time="2025-09-13T00:05:15.248520728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:15.248558 containerd[1445]: time="2025-09-13T00:05:15.248533089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:15.248817 containerd[1445]: time="2025-09-13T00:05:15.248638651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:15.266359 systemd[1]: Started cri-containerd-bd89e408dad37b6291912d397fdbba1345a446721a228c4f2099f8c64e084055.scope - libcontainer container bd89e408dad37b6291912d397fdbba1345a446721a228c4f2099f8c64e084055. Sep 13 00:05:15.292877 containerd[1445]: time="2025-09-13T00:05:15.292836037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb854d69f-qvhzb,Uid:509f6a3f-f7fd-4af7-a9da-5a6c6e430544,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd89e408dad37b6291912d397fdbba1345a446721a228c4f2099f8c64e084055\"" Sep 13 00:05:15.293927 kubelet[2465]: E0913 00:05:15.293711 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:15.295978 containerd[1445]: time="2025-09-13T00:05:15.295930869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:05:15.326880 kubelet[2465]: E0913 00:05:15.326829 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.326880 kubelet[2465]: W0913 00:05:15.326858 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.326880 kubelet[2465]: E0913 00:05:15.326881 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.337770 kubelet[2465]: E0913 00:05:15.337735 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.337770 kubelet[2465]: W0913 00:05:15.337755 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.337770 kubelet[2465]: E0913 00:05:15.337773 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:15.403940 kubelet[2465]: E0913 00:05:15.403883 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5t74" podUID="7d15f87e-d4b3-4e20-9451-06b0fba27ad4" Sep 13 00:05:15.410833 kubelet[2465]: E0913 00:05:15.410794 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.410833 kubelet[2465]: W0913 00:05:15.410818 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.410833 kubelet[2465]: E0913 00:05:15.410838 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.411098 kubelet[2465]: E0913 00:05:15.411079 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.411098 kubelet[2465]: W0913 00:05:15.411093 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.411155 kubelet[2465]: E0913 00:05:15.411109 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.411922 kubelet[2465]: E0913 00:05:15.411885 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.411922 kubelet[2465]: W0913 00:05:15.411908 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.411922 kubelet[2465]: E0913 00:05:15.411921 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.412131 kubelet[2465]: E0913 00:05:15.412111 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.412131 kubelet[2465]: W0913 00:05:15.412125 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.412210 kubelet[2465]: E0913 00:05:15.412136 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:15.412396 kubelet[2465]: E0913 00:05:15.412318 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.412396 kubelet[2465]: W0913 00:05:15.412329 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.412396 kubelet[2465]: E0913 00:05:15.412339 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.412987 kubelet[2465]: E0913 00:05:15.412909 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.412987 kubelet[2465]: W0913 00:05:15.412926 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.412987 kubelet[2465]: E0913 00:05:15.412939 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.413156 kubelet[2465]: E0913 00:05:15.413135 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.413156 kubelet[2465]: W0913 00:05:15.413156 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.413226 kubelet[2465]: E0913 00:05:15.413165 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.413456 kubelet[2465]: E0913 00:05:15.413421 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.413456 kubelet[2465]: W0913 00:05:15.413429 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.413456 kubelet[2465]: E0913 00:05:15.413436 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.413805 kubelet[2465]: E0913 00:05:15.413620 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.413805 kubelet[2465]: W0913 00:05:15.413631 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.413805 kubelet[2465]: E0913 00:05:15.413640 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:15.413906 kubelet[2465]: E0913 00:05:15.413895 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.413906 kubelet[2465]: W0913 00:05:15.413903 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.413946 kubelet[2465]: E0913 00:05:15.413911 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.414200 kubelet[2465]: E0913 00:05:15.414126 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.414200 kubelet[2465]: W0913 00:05:15.414140 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.414200 kubelet[2465]: E0913 00:05:15.414147 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.414990 kubelet[2465]: E0913 00:05:15.414969 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.415094 kubelet[2465]: W0913 00:05:15.414999 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.415094 kubelet[2465]: E0913 00:05:15.415012 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.416099 kubelet[2465]: E0913 00:05:15.416073 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.416099 kubelet[2465]: W0913 00:05:15.416091 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.416099 kubelet[2465]: E0913 00:05:15.416102 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.416523 kubelet[2465]: E0913 00:05:15.416369 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.416523 kubelet[2465]: W0913 00:05:15.416387 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.416523 kubelet[2465]: E0913 00:05:15.416397 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:15.417118 kubelet[2465]: E0913 00:05:15.417033 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.417118 kubelet[2465]: W0913 00:05:15.417112 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.417118 kubelet[2465]: E0913 00:05:15.417125 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.417368 kubelet[2465]: E0913 00:05:15.417316 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.417368 kubelet[2465]: W0913 00:05:15.417331 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.417368 kubelet[2465]: E0913 00:05:15.417340 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.417709 kubelet[2465]: E0913 00:05:15.417695 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.417748 kubelet[2465]: W0913 00:05:15.417710 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.417748 kubelet[2465]: E0913 00:05:15.417722 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.417886 kubelet[2465]: E0913 00:05:15.417873 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.417886 kubelet[2465]: W0913 00:05:15.417885 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.417932 kubelet[2465]: E0913 00:05:15.417893 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.418069 kubelet[2465]: E0913 00:05:15.418058 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.418069 kubelet[2465]: W0913 00:05:15.418068 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.418143 kubelet[2465]: E0913 00:05:15.418076 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:15.418896 kubelet[2465]: E0913 00:05:15.418872 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.418896 kubelet[2465]: W0913 00:05:15.418890 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.418896 kubelet[2465]: E0913 00:05:15.418900 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.419430 kubelet[2465]: E0913 00:05:15.419247 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.419430 kubelet[2465]: W0913 00:05:15.419258 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.419430 kubelet[2465]: E0913 00:05:15.419268 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.419430 kubelet[2465]: I0913 00:05:15.419293 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7d15f87e-d4b3-4e20-9451-06b0fba27ad4-socket-dir\") pod \"csi-node-driver-c5t74\" (UID: \"7d15f87e-d4b3-4e20-9451-06b0fba27ad4\") " pod="calico-system/csi-node-driver-c5t74" Sep 13 00:05:15.419971 kubelet[2465]: E0913 00:05:15.419888 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.419971 kubelet[2465]: W0913 00:05:15.419930 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.419971 kubelet[2465]: E0913 00:05:15.419954 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.419971 kubelet[2465]: I0913 00:05:15.419971 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlt5j\" (UniqueName: \"kubernetes.io/projected/7d15f87e-d4b3-4e20-9451-06b0fba27ad4-kube-api-access-tlt5j\") pod \"csi-node-driver-c5t74\" (UID: \"7d15f87e-d4b3-4e20-9451-06b0fba27ad4\") " pod="calico-system/csi-node-driver-c5t74" Sep 13 00:05:15.420357 kubelet[2465]: E0913 00:05:15.420255 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.420357 kubelet[2465]: W0913 00:05:15.420272 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.420357 kubelet[2465]: E0913 00:05:15.420291 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:15.420718 kubelet[2465]: E0913 00:05:15.420689 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.420718 kubelet[2465]: W0913 00:05:15.420707 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.421116 kubelet[2465]: E0913 00:05:15.420724 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.421116 kubelet[2465]: E0913 00:05:15.420977 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.421116 kubelet[2465]: W0913 00:05:15.420989 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.421116 kubelet[2465]: E0913 00:05:15.421008 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.421116 kubelet[2465]: I0913 00:05:15.421028 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7d15f87e-d4b3-4e20-9451-06b0fba27ad4-registration-dir\") pod \"csi-node-driver-c5t74\" (UID: \"7d15f87e-d4b3-4e20-9451-06b0fba27ad4\") " pod="calico-system/csi-node-driver-c5t74" Sep 13 00:05:15.421525 kubelet[2465]: E0913 00:05:15.421311 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.421525 kubelet[2465]: W0913 00:05:15.421333 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.421525 kubelet[2465]: E0913 00:05:15.421365 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.421525 kubelet[2465]: I0913 00:05:15.421400 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7d15f87e-d4b3-4e20-9451-06b0fba27ad4-varrun\") pod \"csi-node-driver-c5t74\" (UID: \"7d15f87e-d4b3-4e20-9451-06b0fba27ad4\") " pod="calico-system/csi-node-driver-c5t74" Sep 13 00:05:15.421959 kubelet[2465]: E0913 00:05:15.421605 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.421959 kubelet[2465]: W0913 00:05:15.421617 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.421959 kubelet[2465]: E0913 00:05:15.421681 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:15.421959 kubelet[2465]: E0913 00:05:15.421913 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.421959 kubelet[2465]: W0913 00:05:15.421934 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.421959 kubelet[2465]: E0913 00:05:15.421953 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.422292 kubelet[2465]: E0913 00:05:15.422264 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.422292 kubelet[2465]: W0913 00:05:15.422280 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.422443 kubelet[2465]: E0913 00:05:15.422299 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.422443 kubelet[2465]: I0913 00:05:15.422323 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d15f87e-d4b3-4e20-9451-06b0fba27ad4-kubelet-dir\") pod \"csi-node-driver-c5t74\" (UID: \"7d15f87e-d4b3-4e20-9451-06b0fba27ad4\") " pod="calico-system/csi-node-driver-c5t74" Sep 13 00:05:15.422669 kubelet[2465]: E0913 00:05:15.422505 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.422669 kubelet[2465]: W0913 00:05:15.422515 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.422669 kubelet[2465]: E0913 00:05:15.422530 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.422752 kubelet[2465]: E0913 00:05:15.422702 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.422752 kubelet[2465]: W0913 00:05:15.422711 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.422752 kubelet[2465]: E0913 00:05:15.422722 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:15.422752 kubelet[2465]: E0913 00:05:15.422930 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.422752 kubelet[2465]: W0913 00:05:15.422942 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.422752 kubelet[2465]: E0913 00:05:15.422959 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.423247 kubelet[2465]: E0913 00:05:15.423159 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.423247 kubelet[2465]: W0913 00:05:15.423170 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.423247 kubelet[2465]: E0913 00:05:15.423179 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.423425 kubelet[2465]: E0913 00:05:15.423403 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.423425 kubelet[2465]: W0913 00:05:15.423420 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.423507 kubelet[2465]: E0913 00:05:15.423433 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.423635 kubelet[2465]: E0913 00:05:15.423617 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.423635 kubelet[2465]: W0913 00:05:15.423630 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.425167 kubelet[2465]: E0913 00:05:15.423640 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.459633 containerd[1445]: time="2025-09-13T00:05:15.459591467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g29x9,Uid:ebf4a570-5ced-4f02-bbc9-43cb8099fdca,Namespace:calico-system,Attempt:0,}" Sep 13 00:05:15.482670 containerd[1445]: time="2025-09-13T00:05:15.482512439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:15.482828 containerd[1445]: time="2025-09-13T00:05:15.482669563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:15.482828 containerd[1445]: time="2025-09-13T00:05:15.482691324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:15.482902 containerd[1445]: time="2025-09-13T00:05:15.482839167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:15.503256 systemd[1]: Started cri-containerd-735fe8b9bd6a4372c08bfb4e734d2a79af88a97a5b22bbf943de842c1cbd8581.scope - libcontainer container 735fe8b9bd6a4372c08bfb4e734d2a79af88a97a5b22bbf943de842c1cbd8581. Sep 13 00:05:15.524068 kubelet[2465]: E0913 00:05:15.524021 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.524068 kubelet[2465]: W0913 00:05:15.524065 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.524333 kubelet[2465]: E0913 00:05:15.524085 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.524333 kubelet[2465]: E0913 00:05:15.524300 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.524447 kubelet[2465]: W0913 00:05:15.524316 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.524447 kubelet[2465]: E0913 00:05:15.524362 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.524967 kubelet[2465]: E0913 00:05:15.524878 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.524967 kubelet[2465]: W0913 00:05:15.524894 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.524967 kubelet[2465]: E0913 00:05:15.524915 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:15.525224 containerd[1445]: time="2025-09-13T00:05:15.525184390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g29x9,Uid:ebf4a570-5ced-4f02-bbc9-43cb8099fdca,Namespace:calico-system,Attempt:0,} returns sandbox id \"735fe8b9bd6a4372c08bfb4e734d2a79af88a97a5b22bbf943de842c1cbd8581\"" Sep 13 00:05:15.525742 kubelet[2465]: E0913 00:05:15.525712 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.525742 kubelet[2465]: W0913 00:05:15.525732 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.525948 kubelet[2465]: E0913 00:05:15.525746 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.526174 kubelet[2465]: E0913 00:05:15.526083 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.526174 kubelet[2465]: W0913 00:05:15.526106 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.526260 kubelet[2465]: E0913 00:05:15.526210 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.526576 kubelet[2465]: E0913 00:05:15.526557 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.526705 kubelet[2465]: W0913 00:05:15.526575 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.526827 kubelet[2465]: E0913 00:05:15.526728 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.527572 kubelet[2465]: E0913 00:05:15.527553 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.527926 kubelet[2465]: W0913 00:05:15.527571 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.527926 kubelet[2465]: E0913 00:05:15.527780 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:15.528190 kubelet[2465]: E0913 00:05:15.528172 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.528190 kubelet[2465]: W0913 00:05:15.528189 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.528880 kubelet[2465]: E0913 00:05:15.528254 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.530019 kubelet[2465]: E0913 00:05:15.529997 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.530207 kubelet[2465]: W0913 00:05:15.530022 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.530409 kubelet[2465]: E0913 00:05:15.530283 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.530489 kubelet[2465]: E0913 00:05:15.530468 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.530753 kubelet[2465]: W0913 00:05:15.530487 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.530888 kubelet[2465]: E0913 00:05:15.530824 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.531500 kubelet[2465]: E0913 00:05:15.531475 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.531500 kubelet[2465]: W0913 00:05:15.531495 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.538043 kubelet[2465]: E0913 00:05:15.531580 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.538043 kubelet[2465]: E0913 00:05:15.531936 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.538043 kubelet[2465]: W0913 00:05:15.531950 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.538043 kubelet[2465]: E0913 00:05:15.531984 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:15.538043 kubelet[2465]: E0913 00:05:15.532389 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.538043 kubelet[2465]: W0913 00:05:15.532402 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.538043 kubelet[2465]: E0913 00:05:15.532443 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.538043 kubelet[2465]: E0913 00:05:15.532612 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.538043 kubelet[2465]: W0913 00:05:15.532621 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.538043 kubelet[2465]: E0913 00:05:15.532646 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.538353 kubelet[2465]: E0913 00:05:15.532787 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.538353 kubelet[2465]: W0913 00:05:15.532795 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.538353 kubelet[2465]: E0913 00:05:15.532826 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.538353 kubelet[2465]: E0913 00:05:15.532938 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.538353 kubelet[2465]: W0913 00:05:15.532946 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.538353 kubelet[2465]: E0913 00:05:15.532971 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.538353 kubelet[2465]: E0913 00:05:15.533134 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.538353 kubelet[2465]: W0913 00:05:15.533142 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.538353 kubelet[2465]: E0913 00:05:15.533159 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:15.538353 kubelet[2465]: E0913 00:05:15.533386 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.538965 kubelet[2465]: W0913 00:05:15.533400 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.538965 kubelet[2465]: E0913 00:05:15.533419 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.538965 kubelet[2465]: E0913 00:05:15.533664 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.538965 kubelet[2465]: W0913 00:05:15.533677 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.538965 kubelet[2465]: E0913 00:05:15.533694 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.538965 kubelet[2465]: E0913 00:05:15.533900 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.538965 kubelet[2465]: W0913 00:05:15.533907 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.538965 kubelet[2465]: E0913 00:05:15.533916 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.544209 kubelet[2465]: E0913 00:05:15.544184 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.544209 kubelet[2465]: W0913 00:05:15.544209 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.544518 kubelet[2465]: E0913 00:05:15.544287 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.544518 kubelet[2465]: E0913 00:05:15.544472 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.544518 kubelet[2465]: W0913 00:05:15.544484 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.544657 kubelet[2465]: E0913 00:05:15.544605 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:15.545052 kubelet[2465]: E0913 00:05:15.545021 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.545107 kubelet[2465]: W0913 00:05:15.545036 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.545163 kubelet[2465]: E0913 00:05:15.545107 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.545623 kubelet[2465]: E0913 00:05:15.545608 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.545661 kubelet[2465]: W0913 00:05:15.545623 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.546238 kubelet[2465]: E0913 00:05:15.545707 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.546238 kubelet[2465]: E0913 00:05:15.545841 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.546238 kubelet[2465]: W0913 00:05:15.545852 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.546238 kubelet[2465]: E0913 00:05:15.545862 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:15.551202 kubelet[2465]: E0913 00:05:15.551176 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:15.551202 kubelet[2465]: W0913 00:05:15.551198 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:15.551310 kubelet[2465]: E0913 00:05:15.551216 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:16.431942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846410401.mount: Deactivated successfully. 
Sep 13 00:05:16.988903 kubelet[2465]: E0913 00:05:16.988856 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5t74" podUID="7d15f87e-d4b3-4e20-9451-06b0fba27ad4" Sep 13 00:05:17.041792 containerd[1445]: time="2025-09-13T00:05:17.041718571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:17.043342 containerd[1445]: time="2025-09-13T00:05:17.043128561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 13 00:05:17.046388 containerd[1445]: time="2025-09-13T00:05:17.046325669Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:17.050682 containerd[1445]: time="2025-09-13T00:05:17.050643840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.754458086s" Sep 13 00:05:17.050794 containerd[1445]: time="2025-09-13T00:05:17.050776963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 13 00:05:17.051557 containerd[1445]: time="2025-09-13T00:05:17.051530859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:17.052017 containerd[1445]: time="2025-09-13T00:05:17.051864626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:05:17.064119 containerd[1445]: time="2025-09-13T00:05:17.064081765Z" level=info msg="CreateContainer within sandbox \"bd89e408dad37b6291912d397fdbba1345a446721a228c4f2099f8c64e084055\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:05:17.077837 containerd[1445]: time="2025-09-13T00:05:17.077697653Z" level=info msg="CreateContainer within sandbox \"bd89e408dad37b6291912d397fdbba1345a446721a228c4f2099f8c64e084055\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"762bd078069ffd63a78194278c142ee2e45837ec3cf461f9adaa7eabbe602413\"" Sep 13 00:05:17.079086 containerd[1445]: time="2025-09-13T00:05:17.078398108Z" level=info msg="StartContainer for \"762bd078069ffd63a78194278c142ee2e45837ec3cf461f9adaa7eabbe602413\"" Sep 13 00:05:17.111241 systemd[1]: Started cri-containerd-762bd078069ffd63a78194278c142ee2e45837ec3cf461f9adaa7eabbe602413.scope - libcontainer container 762bd078069ffd63a78194278c142ee2e45837ec3cf461f9adaa7eabbe602413. 
Sep 13 00:05:17.145986 containerd[1445]: time="2025-09-13T00:05:17.145939777Z" level=info msg="StartContainer for \"762bd078069ffd63a78194278c142ee2e45837ec3cf461f9adaa7eabbe602413\" returns successfully" Sep 13 00:05:18.080784 kubelet[2465]: E0913 00:05:18.080752 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:18.116320 kubelet[2465]: I0913 00:05:18.116250 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bb854d69f-qvhzb" podStartSLOduration=2.360100566 podStartE2EDuration="4.116235328s" podCreationTimestamp="2025-09-13 00:05:14 +0000 UTC" firstStartedPulling="2025-09-13 00:05:15.295606421 +0000 UTC m=+20.393252944" lastFinishedPulling="2025-09-13 00:05:17.051741183 +0000 UTC m=+22.149387706" observedRunningTime="2025-09-13 00:05:18.099340426 +0000 UTC m=+23.196986949" watchObservedRunningTime="2025-09-13 00:05:18.116235328 +0000 UTC m=+23.213881851" Sep 13 00:05:18.138597 kubelet[2465]: E0913 00:05:18.138441 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.138597 kubelet[2465]: W0913 00:05:18.138472 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.138597 kubelet[2465]: E0913 00:05:18.138502 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.138955 kubelet[2465]: E0913 00:05:18.138696 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.138955 kubelet[2465]: W0913 00:05:18.138706 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.138955 kubelet[2465]: E0913 00:05:18.138716 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.139661 kubelet[2465]: E0913 00:05:18.139453 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.139661 kubelet[2465]: W0913 00:05:18.139500 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.139661 kubelet[2465]: E0913 00:05:18.139514 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:18.140443 kubelet[2465]: E0913 00:05:18.139937 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.140443 kubelet[2465]: W0913 00:05:18.139951 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.140443 kubelet[2465]: E0913 00:05:18.140003 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.141249 kubelet[2465]: E0913 00:05:18.141115 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.141249 kubelet[2465]: W0913 00:05:18.141153 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.141249 kubelet[2465]: E0913 00:05:18.141165 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.141555 kubelet[2465]: E0913 00:05:18.141357 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.141555 kubelet[2465]: W0913 00:05:18.141367 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.141555 kubelet[2465]: E0913 00:05:18.141377 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.141924 kubelet[2465]: E0913 00:05:18.141769 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.141924 kubelet[2465]: W0913 00:05:18.141781 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.141924 kubelet[2465]: E0913 00:05:18.141793 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.142494 kubelet[2465]: E0913 00:05:18.142360 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.142494 kubelet[2465]: W0913 00:05:18.142378 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.142494 kubelet[2465]: E0913 00:05:18.142390 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:18.142710 kubelet[2465]: E0913 00:05:18.142669 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.142710 kubelet[2465]: W0913 00:05:18.142681 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.142710 kubelet[2465]: E0913 00:05:18.142692 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.143155 kubelet[2465]: E0913 00:05:18.143007 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.143155 kubelet[2465]: W0913 00:05:18.143020 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.143155 kubelet[2465]: E0913 00:05:18.143031 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.143597 kubelet[2465]: E0913 00:05:18.143316 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.143597 kubelet[2465]: W0913 00:05:18.143530 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.143597 kubelet[2465]: E0913 00:05:18.143546 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.144137 kubelet[2465]: E0913 00:05:18.143965 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.144137 kubelet[2465]: W0913 00:05:18.144008 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.144137 kubelet[2465]: E0913 00:05:18.144024 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.144528 kubelet[2465]: E0913 00:05:18.144353 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.144528 kubelet[2465]: W0913 00:05:18.144366 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.144528 kubelet[2465]: E0913 00:05:18.144399 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:18.144994 kubelet[2465]: E0913 00:05:18.144874 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.144994 kubelet[2465]: W0913 00:05:18.144888 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.144994 kubelet[2465]: E0913 00:05:18.144906 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.145286 containerd[1445]: time="2025-09-13T00:05:18.144849227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:18.145933 kubelet[2465]: E0913 00:05:18.145098 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.145933 kubelet[2465]: W0913 00:05:18.145107 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.145933 kubelet[2465]: E0913 00:05:18.145117 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.146315 containerd[1445]: time="2025-09-13T00:05:18.146285416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 13 00:05:18.146389 kubelet[2465]: E0913 00:05:18.146351 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.146389 kubelet[2465]: W0913 00:05:18.146361 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.146389 kubelet[2465]: E0913 00:05:18.146374 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.146575 kubelet[2465]: E0913 00:05:18.146561 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.146575 kubelet[2465]: W0913 00:05:18.146572 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.146701 kubelet[2465]: E0913 00:05:18.146581 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:18.146777 kubelet[2465]: E0913 00:05:18.146765 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.146777 kubelet[2465]: W0913 00:05:18.146777 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.146845 kubelet[2465]: E0913 00:05:18.146790 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.146985 kubelet[2465]: E0913 00:05:18.146975 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.146985 kubelet[2465]: W0913 00:05:18.146984 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.147064 kubelet[2465]: E0913 00:05:18.146997 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.147207 kubelet[2465]: E0913 00:05:18.147166 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.147207 kubelet[2465]: W0913 00:05:18.147175 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.147207 kubelet[2465]: E0913 00:05:18.147187 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.147399 kubelet[2465]: E0913 00:05:18.147307 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.147399 kubelet[2465]: W0913 00:05:18.147313 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.147399 kubelet[2465]: E0913 00:05:18.147320 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.147535 kubelet[2465]: E0913 00:05:18.147502 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.147535 kubelet[2465]: W0913 00:05:18.147512 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.147535 kubelet[2465]: E0913 00:05:18.147524 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:18.148164 containerd[1445]: time="2025-09-13T00:05:18.147957770Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:18.148379 kubelet[2465]: E0913 00:05:18.148294 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.148379 kubelet[2465]: W0913 00:05:18.148307 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.148379 kubelet[2465]: E0913 00:05:18.148328 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.148651 kubelet[2465]: E0913 00:05:18.148638 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.148811 kubelet[2465]: W0913 00:05:18.148743 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.148811 kubelet[2465]: E0913 00:05:18.148795 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.149424 kubelet[2465]: E0913 00:05:18.149007 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.149679 kubelet[2465]: W0913 00:05:18.149540 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.149679 kubelet[2465]: E0913 00:05:18.149591 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.149910 kubelet[2465]: E0913 00:05:18.149899 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.149961 containerd[1445]: time="2025-09-13T00:05:18.149919449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:18.150062 kubelet[2465]: W0913 00:05:18.150000 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.150127 kubelet[2465]: E0913 00:05:18.150107 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:18.150308 kubelet[2465]: E0913 00:05:18.150296 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.150407 kubelet[2465]: W0913 00:05:18.150359 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.150407 kubelet[2465]: E0913 00:05:18.150390 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.150700 kubelet[2465]: E0913 00:05:18.150615 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.150700 kubelet[2465]: W0913 00:05:18.150624 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.150700 kubelet[2465]: E0913 00:05:18.150639 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.150810 containerd[1445]: time="2025-09-13T00:05:18.150782827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.098891881s" Sep 13 00:05:18.150883 containerd[1445]: time="2025-09-13T00:05:18.150813627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 13 00:05:18.151220 kubelet[2465]: E0913 00:05:18.151209 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.151281 kubelet[2465]: W0913 00:05:18.151270 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.151339 kubelet[2465]: E0913 00:05:18.151329 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:18.152077 kubelet[2465]: E0913 00:05:18.152036 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.152151 kubelet[2465]: W0913 00:05:18.152140 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.152620 kubelet[2465]: E0913 00:05:18.152391 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.152620 kubelet[2465]: W0913 00:05:18.152402 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.152620 kubelet[2465]: E0913 00:05:18.152413 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.152620 kubelet[2465]: E0913 00:05:18.152545 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.152806 kubelet[2465]: E0913 00:05:18.152794 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.152854 kubelet[2465]: W0913 00:05:18.152845 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.152910 kubelet[2465]: E0913 00:05:18.152899 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.153190 kubelet[2465]: E0913 00:05:18.153174 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:18.153286 kubelet[2465]: W0913 00:05:18.153272 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:18.153352 kubelet[2465]: E0913 00:05:18.153340 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:18.153410 containerd[1445]: time="2025-09-13T00:05:18.153376159Z" level=info msg="CreateContainer within sandbox \"735fe8b9bd6a4372c08bfb4e734d2a79af88a97a5b22bbf943de842c1cbd8581\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:05:18.166208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2713668453.mount: Deactivated successfully. 
Sep 13 00:05:18.168838 containerd[1445]: time="2025-09-13T00:05:18.168802271Z" level=info msg="CreateContainer within sandbox \"735fe8b9bd6a4372c08bfb4e734d2a79af88a97a5b22bbf943de842c1cbd8581\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fccc4b0b7f79f8de8769d0e91441fd6d1b29fa4d9690e42a065f3d409a3de0c0\"" Sep 13 00:05:18.169672 containerd[1445]: time="2025-09-13T00:05:18.169638808Z" level=info msg="StartContainer for \"fccc4b0b7f79f8de8769d0e91441fd6d1b29fa4d9690e42a065f3d409a3de0c0\"" Sep 13 00:05:18.200169 systemd[1]: Started cri-containerd-fccc4b0b7f79f8de8769d0e91441fd6d1b29fa4d9690e42a065f3d409a3de0c0.scope - libcontainer container fccc4b0b7f79f8de8769d0e91441fd6d1b29fa4d9690e42a065f3d409a3de0c0. Sep 13 00:05:18.227607 containerd[1445]: time="2025-09-13T00:05:18.227569141Z" level=info msg="StartContainer for \"fccc4b0b7f79f8de8769d0e91441fd6d1b29fa4d9690e42a065f3d409a3de0c0\" returns successfully" Sep 13 00:05:18.239208 systemd[1]: cri-containerd-fccc4b0b7f79f8de8769d0e91441fd6d1b29fa4d9690e42a065f3d409a3de0c0.scope: Deactivated successfully. Sep 13 00:05:18.360567 containerd[1445]: time="2025-09-13T00:05:18.351773735Z" level=info msg="shim disconnected" id=fccc4b0b7f79f8de8769d0e91441fd6d1b29fa4d9690e42a065f3d409a3de0c0 namespace=k8s.io Sep 13 00:05:18.360567 containerd[1445]: time="2025-09-13T00:05:18.360486031Z" level=warning msg="cleaning up after shim disconnected" id=fccc4b0b7f79f8de8769d0e91441fd6d1b29fa4d9690e42a065f3d409a3de0c0 namespace=k8s.io Sep 13 00:05:18.360567 containerd[1445]: time="2025-09-13T00:05:18.360509711Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:05:18.989380 kubelet[2465]: E0913 00:05:18.988900 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5t74" podUID="7d15f87e-d4b3-4e20-9451-06b0fba27ad4" Sep 13 00:05:19.060392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fccc4b0b7f79f8de8769d0e91441fd6d1b29fa4d9690e42a065f3d409a3de0c0-rootfs.mount: Deactivated successfully. 
Sep 13 00:05:19.085920 kubelet[2465]: E0913 00:05:19.085825 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:19.091383 containerd[1445]: time="2025-09-13T00:05:19.091066658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:05:20.086167 kubelet[2465]: E0913 00:05:20.085927 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:20.989246 kubelet[2465]: E0913 00:05:20.989191 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5t74" podUID="7d15f87e-d4b3-4e20-9451-06b0fba27ad4" Sep 13 00:05:21.660755 containerd[1445]: time="2025-09-13T00:05:21.660688611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:21.664296 containerd[1445]: time="2025-09-13T00:05:21.664246515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 13 00:05:21.666655 containerd[1445]: time="2025-09-13T00:05:21.666596436Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:21.669543 containerd[1445]: time="2025-09-13T00:05:21.669497448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:21.670662 containerd[1445]: time="2025-09-13T00:05:21.670297102Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.579193483s" Sep 13 00:05:21.670662 containerd[1445]: time="2025-09-13T00:05:21.670337103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 13 00:05:21.673159 containerd[1445]: time="2025-09-13T00:05:21.673084752Z" level=info msg="CreateContainer within sandbox \"735fe8b9bd6a4372c08bfb4e734d2a79af88a97a5b22bbf943de842c1cbd8581\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:05:21.704481 containerd[1445]: time="2025-09-13T00:05:21.704230306Z" level=info msg="CreateContainer within sandbox \"735fe8b9bd6a4372c08bfb4e734d2a79af88a97a5b22bbf943de842c1cbd8581\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7b44c49a0390e71e61692e5e974109ddfa760d557e84983722cb0071c81f787e\"" Sep 13 00:05:21.705605 containerd[1445]: time="2025-09-13T00:05:21.704983759Z" level=info msg="StartContainer for \"7b44c49a0390e71e61692e5e974109ddfa760d557e84983722cb0071c81f787e\"" Sep 13 00:05:21.742276 systemd[1]: Started cri-containerd-7b44c49a0390e71e61692e5e974109ddfa760d557e84983722cb0071c81f787e.scope - libcontainer 
container 7b44c49a0390e71e61692e5e974109ddfa760d557e84983722cb0071c81f787e. Sep 13 00:05:21.779216 containerd[1445]: time="2025-09-13T00:05:21.779094398Z" level=info msg="StartContainer for \"7b44c49a0390e71e61692e5e974109ddfa760d557e84983722cb0071c81f787e\" returns successfully" Sep 13 00:05:22.360469 systemd[1]: cri-containerd-7b44c49a0390e71e61692e5e974109ddfa760d557e84983722cb0071c81f787e.scope: Deactivated successfully. Sep 13 00:05:22.385393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b44c49a0390e71e61692e5e974109ddfa760d557e84983722cb0071c81f787e-rootfs.mount: Deactivated successfully. Sep 13 00:05:22.398431 containerd[1445]: time="2025-09-13T00:05:22.398228447Z" level=info msg="shim disconnected" id=7b44c49a0390e71e61692e5e974109ddfa760d557e84983722cb0071c81f787e namespace=k8s.io Sep 13 00:05:22.398431 containerd[1445]: time="2025-09-13T00:05:22.398281968Z" level=warning msg="cleaning up after shim disconnected" id=7b44c49a0390e71e61692e5e974109ddfa760d557e84983722cb0071c81f787e namespace=k8s.io Sep 13 00:05:22.398431 containerd[1445]: time="2025-09-13T00:05:22.398290288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:05:22.452878 kubelet[2465]: I0913 00:05:22.452833 2465 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:05:22.503961 systemd[1]: Created slice kubepods-burstable-podaf8d6313_8016_4b63_b286_8fb59033218e.slice - libcontainer container kubepods-burstable-podaf8d6313_8016_4b63_b286_8fb59033218e.slice. Sep 13 00:05:22.514681 systemd[1]: Created slice kubepods-besteffort-podf5742ff1_d3ec_4bea_89be_500213d11650.slice - libcontainer container kubepods-besteffort-podf5742ff1_d3ec_4bea_89be_500213d11650.slice. Sep 13 00:05:22.530728 systemd[1]: Created slice kubepods-burstable-pod37010d28_d42d_4197_85d1_065db74b5133.slice - libcontainer container kubepods-burstable-pod37010d28_d42d_4197_85d1_065db74b5133.slice. Sep 13 00:05:22.533442 systemd[1]: Created slice kubepods-besteffort-pod071f06bb_6417_46e6_bfae_280329d73932.slice - libcontainer container kubepods-besteffort-pod071f06bb_6417_46e6_bfae_280329d73932.slice. Sep 13 00:05:22.543281 systemd[1]: Created slice kubepods-besteffort-pod7691cf75_c4db_4e69_bd59_9bd4189e2702.slice - libcontainer container kubepods-besteffort-pod7691cf75_c4db_4e69_bd59_9bd4189e2702.slice. Sep 13 00:05:22.551004 systemd[1]: Created slice kubepods-besteffort-pod616325c3_a8fc_46b9_a361_13ec6d383dfa.slice - libcontainer container kubepods-besteffort-pod616325c3_a8fc_46b9_a361_13ec6d383dfa.slice. Sep 13 00:05:22.555096 systemd[1]: Created slice kubepods-besteffort-podfa29da06_fe61_4e04_b4a0_a7f57e663902.slice - libcontainer container kubepods-besteffort-podfa29da06_fe61_4e04_b4a0_a7f57e663902.slice. 
Sep 13 00:05:22.580217 kubelet[2465]: I0913 00:05:22.580176 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqvzz\" (UniqueName: \"kubernetes.io/projected/7691cf75-c4db-4e69-bd59-9bd4189e2702-kube-api-access-fqvzz\") pod \"calico-apiserver-5c55569db5-8jkvx\" (UID: \"7691cf75-c4db-4e69-bd59-9bd4189e2702\") " pod="calico-apiserver/calico-apiserver-5c55569db5-8jkvx" Sep 13 00:05:22.580217 kubelet[2465]: I0913 00:05:22.580218 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af8d6313-8016-4b63-b286-8fb59033218e-config-volume\") pod \"coredns-7c65d6cfc9-42f6q\" (UID: \"af8d6313-8016-4b63-b286-8fb59033218e\") " pod="kube-system/coredns-7c65d6cfc9-42f6q" Sep 13 00:05:22.580384 kubelet[2465]: I0913 00:05:22.580237 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v4p2\" (UniqueName: \"kubernetes.io/projected/37010d28-d42d-4197-85d1-065db74b5133-kube-api-access-4v4p2\") pod \"coredns-7c65d6cfc9-lmfsr\" (UID: \"37010d28-d42d-4197-85d1-065db74b5133\") " pod="kube-system/coredns-7c65d6cfc9-lmfsr" Sep 13 00:05:22.580384 kubelet[2465]: I0913 00:05:22.580254 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj6xt\" (UniqueName: \"kubernetes.io/projected/616325c3-a8fc-46b9-a361-13ec6d383dfa-kube-api-access-pj6xt\") pod \"whisker-85f66bff6f-klv7r\" (UID: \"616325c3-a8fc-46b9-a361-13ec6d383dfa\") " pod="calico-system/whisker-85f66bff6f-klv7r" Sep 13 00:05:22.580384 kubelet[2465]: I0913 00:05:22.580272 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hw6j\" (UniqueName: \"kubernetes.io/projected/fa29da06-fe61-4e04-b4a0-a7f57e663902-kube-api-access-5hw6j\") pod \"goldmane-7988f88666-snwp2\" (UID: \"fa29da06-fe61-4e04-b4a0-a7f57e663902\") " pod="calico-system/goldmane-7988f88666-snwp2" Sep 13 00:05:22.580384 kubelet[2465]: I0913 00:05:22.580288 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kks5\" (UniqueName: \"kubernetes.io/projected/af8d6313-8016-4b63-b286-8fb59033218e-kube-api-access-6kks5\") pod \"coredns-7c65d6cfc9-42f6q\" (UID: \"af8d6313-8016-4b63-b286-8fb59033218e\") " pod="kube-system/coredns-7c65d6cfc9-42f6q" Sep 13 00:05:22.580384 kubelet[2465]: I0913 00:05:22.580305 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7691cf75-c4db-4e69-bd59-9bd4189e2702-calico-apiserver-certs\") pod \"calico-apiserver-5c55569db5-8jkvx\" (UID: \"7691cf75-c4db-4e69-bd59-9bd4189e2702\") " pod="calico-apiserver/calico-apiserver-5c55569db5-8jkvx" Sep 13 00:05:22.580497 kubelet[2465]: I0913 00:05:22.580323 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/616325c3-a8fc-46b9-a361-13ec6d383dfa-whisker-backend-key-pair\") pod \"whisker-85f66bff6f-klv7r\" (UID: \"616325c3-a8fc-46b9-a361-13ec6d383dfa\") " pod="calico-system/whisker-85f66bff6f-klv7r" Sep 13 00:05:22.580497 kubelet[2465]: I0913 00:05:22.580339 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/071f06bb-6417-46e6-bfae-280329d73932-tigera-ca-bundle\") pod \"calico-kube-controllers-5f5b759db5-cjrjx\" (UID: \"071f06bb-6417-46e6-bfae-280329d73932\") " pod="calico-system/calico-kube-controllers-5f5b759db5-cjrjx" Sep 13 00:05:22.580497 kubelet[2465]: I0913 00:05:22.580354 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa29da06-fe61-4e04-b4a0-a7f57e663902-config\") pod \"goldmane-7988f88666-snwp2\" (UID: \"fa29da06-fe61-4e04-b4a0-a7f57e663902\") " pod="calico-system/goldmane-7988f88666-snwp2" Sep 13 00:05:22.580497 kubelet[2465]: I0913 00:05:22.580369 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa29da06-fe61-4e04-b4a0-a7f57e663902-goldmane-ca-bundle\") pod \"goldmane-7988f88666-snwp2\" (UID: \"fa29da06-fe61-4e04-b4a0-a7f57e663902\") " pod="calico-system/goldmane-7988f88666-snwp2" Sep 13 00:05:22.580497 kubelet[2465]: I0913 00:05:22.580384 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fa29da06-fe61-4e04-b4a0-a7f57e663902-goldmane-key-pair\") pod \"goldmane-7988f88666-snwp2\" (UID: \"fa29da06-fe61-4e04-b4a0-a7f57e663902\") " pod="calico-system/goldmane-7988f88666-snwp2" Sep 13 00:05:22.580624 kubelet[2465]: I0913 00:05:22.580401 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f5742ff1-d3ec-4bea-89be-500213d11650-calico-apiserver-certs\") pod \"calico-apiserver-5c55569db5-8wwmc\" (UID: \"f5742ff1-d3ec-4bea-89be-500213d11650\") " pod="calico-apiserver/calico-apiserver-5c55569db5-8wwmc" Sep 13 00:05:22.580624 kubelet[2465]: I0913 00:05:22.580420 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s5tm\" (UniqueName: \"kubernetes.io/projected/f5742ff1-d3ec-4bea-89be-500213d11650-kube-api-access-4s5tm\") pod \"calico-apiserver-5c55569db5-8wwmc\" (UID: \"f5742ff1-d3ec-4bea-89be-500213d11650\") " pod="calico-apiserver/calico-apiserver-5c55569db5-8wwmc" Sep 13 00:05:22.580624 kubelet[2465]: I0913 00:05:22.580441 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g7wr\" (UniqueName: \"kubernetes.io/projected/071f06bb-6417-46e6-bfae-280329d73932-kube-api-access-4g7wr\") pod \"calico-kube-controllers-5f5b759db5-cjrjx\" (UID: \"071f06bb-6417-46e6-bfae-280329d73932\") " pod="calico-system/calico-kube-controllers-5f5b759db5-cjrjx" Sep 13 00:05:22.580624 kubelet[2465]: I0913 00:05:22.580458 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/616325c3-a8fc-46b9-a361-13ec6d383dfa-whisker-ca-bundle\") pod \"whisker-85f66bff6f-klv7r\" (UID: \"616325c3-a8fc-46b9-a361-13ec6d383dfa\") " pod="calico-system/whisker-85f66bff6f-klv7r" Sep 13 00:05:22.580624 kubelet[2465]: I0913 00:05:22.580473 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37010d28-d42d-4197-85d1-065db74b5133-config-volume\") pod \"coredns-7c65d6cfc9-lmfsr\" (UID: \"37010d28-d42d-4197-85d1-065db74b5133\") " 
pod="kube-system/coredns-7c65d6cfc9-lmfsr" Sep 13 00:05:22.811127 kubelet[2465]: E0913 00:05:22.810851 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:22.812651 containerd[1445]: time="2025-09-13T00:05:22.812607521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-42f6q,Uid:af8d6313-8016-4b63-b286-8fb59033218e,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:22.820319 containerd[1445]: time="2025-09-13T00:05:22.820135850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c55569db5-8wwmc,Uid:f5742ff1-d3ec-4bea-89be-500213d11650,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:05:22.837950 kubelet[2465]: E0913 00:05:22.837860 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:22.838462 containerd[1445]: time="2025-09-13T00:05:22.838428322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lmfsr,Uid:37010d28-d42d-4197-85d1-065db74b5133,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:22.843326 containerd[1445]: time="2025-09-13T00:05:22.842880278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5b759db5-cjrjx,Uid:071f06bb-6417-46e6-bfae-280329d73932,Namespace:calico-system,Attempt:0,}" Sep 13 00:05:22.847054 containerd[1445]: time="2025-09-13T00:05:22.847009388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c55569db5-8jkvx,Uid:7691cf75-c4db-4e69-bd59-9bd4189e2702,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:05:22.855977 containerd[1445]: time="2025-09-13T00:05:22.855237129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85f66bff6f-klv7r,Uid:616325c3-a8fc-46b9-a361-13ec6d383dfa,Namespace:calico-system,Attempt:0,}" Sep 13 00:05:22.858079 containerd[1445]: time="2025-09-13T00:05:22.857856534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-snwp2,Uid:fa29da06-fe61-4e04-b4a0-a7f57e663902,Namespace:calico-system,Attempt:0,}" Sep 13 00:05:22.995892 systemd[1]: Created slice kubepods-besteffort-pod7d15f87e_d4b3_4e20_9451_06b0fba27ad4.slice - libcontainer container kubepods-besteffort-pod7d15f87e_d4b3_4e20_9451_06b0fba27ad4.slice. 
Sep 13 00:05:23.004880 containerd[1445]: time="2025-09-13T00:05:23.004601717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5t74,Uid:7d15f87e-d4b3-4e20-9451-06b0fba27ad4,Namespace:calico-system,Attempt:0,}" Sep 13 00:05:23.046571 containerd[1445]: time="2025-09-13T00:05:23.046316361Z" level=error msg="Failed to destroy network for sandbox \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.047024 containerd[1445]: time="2025-09-13T00:05:23.046984652Z" level=error msg="encountered an error cleaning up failed sandbox \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.047097 containerd[1445]: time="2025-09-13T00:05:23.047062493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c55569db5-8wwmc,Uid:f5742ff1-d3ec-4bea-89be-500213d11650,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.047812 containerd[1445]: time="2025-09-13T00:05:23.047605102Z" level=error msg="Failed to destroy network for sandbox \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.048252 containerd[1445]: time="2025-09-13T00:05:23.048116630Z" level=error msg="encountered an error cleaning up failed sandbox \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.048389 containerd[1445]: time="2025-09-13T00:05:23.048266913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-42f6q,Uid:af8d6313-8016-4b63-b286-8fb59033218e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.049032 kubelet[2465]: E0913 00:05:23.048984 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.049245 containerd[1445]: time="2025-09-13T00:05:23.049196808Z" level=error msg="Failed to 
destroy network for sandbox \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.049348 kubelet[2465]: E0913 00:05:23.049317 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.050075 kubelet[2465]: E0913 00:05:23.050022 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-42f6q" Sep 13 00:05:23.050365 containerd[1445]: time="2025-09-13T00:05:23.050337267Z" level=error msg="encountered an error cleaning up failed sandbox \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.050417 containerd[1445]: time="2025-09-13T00:05:23.050379627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5b759db5-cjrjx,Uid:071f06bb-6417-46e6-bfae-280329d73932,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.050735 kubelet[2465]: E0913 00:05:23.050557 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.050735 kubelet[2465]: E0913 00:05:23.050668 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f5b759db5-cjrjx" Sep 13 00:05:23.054435 containerd[1445]: time="2025-09-13T00:05:23.054394173Z" level=error msg="Failed to destroy network for sandbox \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 13 00:05:23.054930 containerd[1445]: time="2025-09-13T00:05:23.054873941Z" level=error msg="Failed to destroy network for sandbox \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.054996 containerd[1445]: time="2025-09-13T00:05:23.054939182Z" level=error msg="encountered an error cleaning up failed sandbox \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.055024 containerd[1445]: time="2025-09-13T00:05:23.054991343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lmfsr,Uid:37010d28-d42d-4197-85d1-065db74b5133,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.055269 containerd[1445]: time="2025-09-13T00:05:23.055238667Z" level=error msg="encountered an error cleaning up failed sandbox \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.055315 containerd[1445]: time="2025-09-13T00:05:23.055284668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c55569db5-8jkvx,Uid:7691cf75-c4db-4e69-bd59-9bd4189e2702,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.055557 kubelet[2465]: E0913 00:05:23.055415 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.055557 kubelet[2465]: E0913 00:05:23.055472 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c55569db5-8jkvx" Sep 13 00:05:23.056659 kubelet[2465]: E0913 00:05:23.056337 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c55569db5-8wwmc" Sep 13 00:05:23.056769 kubelet[2465]: E0913 00:05:23.056749 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c55569db5-8wwmc" Sep 13 00:05:23.056899 kubelet[2465]: E0913 00:05:23.056452 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c55569db5-8jkvx" Sep 13 00:05:23.057027 kubelet[2465]: E0913 00:05:23.056998 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c55569db5-8jkvx_calico-apiserver(7691cf75-c4db-4e69-bd59-9bd4189e2702)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c55569db5-8jkvx_calico-apiserver(7691cf75-c4db-4e69-bd59-9bd4189e2702)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c55569db5-8jkvx" podUID="7691cf75-c4db-4e69-bd59-9bd4189e2702" Sep 13 00:05:23.057245 kubelet[2465]: E0913 00:05:23.056649 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.057245 kubelet[2465]: E0913 00:05:23.057176 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lmfsr" Sep 13 00:05:23.057245 kubelet[2465]: E0913 00:05:23.057191 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-lmfsr" Sep 13 00:05:23.057358 kubelet[2465]: E0913 00:05:23.057220 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-lmfsr_kube-system(37010d28-d42d-4197-85d1-065db74b5133)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-lmfsr_kube-system(37010d28-d42d-4197-85d1-065db74b5133)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-lmfsr" podUID="37010d28-d42d-4197-85d1-065db74b5133" Sep 13 00:05:23.060070 kubelet[2465]: E0913 00:05:23.056852 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c55569db5-8wwmc_calico-apiserver(f5742ff1-d3ec-4bea-89be-500213d11650)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c55569db5-8wwmc_calico-apiserver(f5742ff1-d3ec-4bea-89be-500213d11650)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c55569db5-8wwmc" podUID="f5742ff1-d3ec-4bea-89be-500213d11650" Sep 13 00:05:23.061319 kubelet[2465]: E0913 00:05:23.061103 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-42f6q" Sep 13 00:05:23.061545 kubelet[2465]: E0913 00:05:23.061423 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-42f6q_kube-system(af8d6313-8016-4b63-b286-8fb59033218e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-42f6q_kube-system(af8d6313-8016-4b63-b286-8fb59033218e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-42f6q" podUID="af8d6313-8016-4b63-b286-8fb59033218e" Sep 13 00:05:23.062328 containerd[1445]: time="2025-09-13T00:05:23.062283503Z" level=error msg="Failed to destroy network for sandbox \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.062999 containerd[1445]: time="2025-09-13T00:05:23.062965074Z" level=error msg="encountered an error cleaning up failed sandbox 
\"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.063084 containerd[1445]: time="2025-09-13T00:05:23.063026075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-snwp2,Uid:fa29da06-fe61-4e04-b4a0-a7f57e663902,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.063287 kubelet[2465]: E0913 00:05:23.063260 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.063546 kubelet[2465]: E0913 00:05:23.063523 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-snwp2" Sep 13 00:05:23.063767 kubelet[2465]: E0913 00:05:23.063743 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-snwp2" Sep 13 00:05:23.064509 kubelet[2465]: E0913 00:05:23.064480 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-snwp2_calico-system(fa29da06-fe61-4e04-b4a0-a7f57e663902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-snwp2_calico-system(fa29da06-fe61-4e04-b4a0-a7f57e663902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-snwp2" podUID="fa29da06-fe61-4e04-b4a0-a7f57e663902" Sep 13 00:05:23.067136 kubelet[2465]: E0913 00:05:23.060556 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5f5b759db5-cjrjx" Sep 13 00:05:23.067371 kubelet[2465]: E0913 00:05:23.067333 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f5b759db5-cjrjx_calico-system(071f06bb-6417-46e6-bfae-280329d73932)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f5b759db5-cjrjx_calico-system(071f06bb-6417-46e6-bfae-280329d73932)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f5b759db5-cjrjx" podUID="071f06bb-6417-46e6-bfae-280329d73932" Sep 13 00:05:23.069831 containerd[1445]: time="2025-09-13T00:05:23.069778146Z" level=error msg="Failed to destroy network for sandbox \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.070143 containerd[1445]: time="2025-09-13T00:05:23.070117271Z" level=error msg="encountered an error cleaning up failed sandbox \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.070184 containerd[1445]: time="2025-09-13T00:05:23.070167712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85f66bff6f-klv7r,Uid:616325c3-a8fc-46b9-a361-13ec6d383dfa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.071313 kubelet[2465]: E0913 00:05:23.071215 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.071313 kubelet[2465]: E0913 00:05:23.071264 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85f66bff6f-klv7r" Sep 13 00:05:23.071313 kubelet[2465]: E0913 00:05:23.071281 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85f66bff6f-klv7r" Sep 13 00:05:23.071594 kubelet[2465]: E0913 00:05:23.071470 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-85f66bff6f-klv7r_calico-system(616325c3-a8fc-46b9-a361-13ec6d383dfa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-85f66bff6f-klv7r_calico-system(616325c3-a8fc-46b9-a361-13ec6d383dfa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85f66bff6f-klv7r" podUID="616325c3-a8fc-46b9-a361-13ec6d383dfa" Sep 13 00:05:23.093738 containerd[1445]: time="2025-09-13T00:05:23.093608816Z" level=error msg="Failed to destroy network for sandbox \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.094416 containerd[1445]: time="2025-09-13T00:05:23.094047744Z" level=error msg="encountered an error cleaning up failed sandbox \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.094416 containerd[1445]: time="2025-09-13T00:05:23.094104585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5t74,Uid:7d15f87e-d4b3-4e20-9451-06b0fba27ad4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.094525 kubelet[2465]: E0913 00:05:23.094299 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.094525 kubelet[2465]: E0913 00:05:23.094345 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c5t74" Sep 13 00:05:23.094525 kubelet[2465]: E0913 00:05:23.094373 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c5t74" Sep 13 00:05:23.094609 kubelet[2465]: E0913 00:05:23.094412 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c5t74_calico-system(7d15f87e-d4b3-4e20-9451-06b0fba27ad4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c5t74_calico-system(7d15f87e-d4b3-4e20-9451-06b0fba27ad4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c5t74" podUID="7d15f87e-d4b3-4e20-9451-06b0fba27ad4" Sep 13 00:05:23.111355 kubelet[2465]: I0913 00:05:23.110491 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:23.111495 containerd[1445]: time="2025-09-13T00:05:23.111060303Z" level=info msg="StopPodSandbox for \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\"" Sep 13 00:05:23.111495 containerd[1445]: time="2025-09-13T00:05:23.111223145Z" level=info msg="Ensure that sandbox 9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3 in task-service has been cleanup successfully" Sep 13 00:05:23.114621 kubelet[2465]: I0913 00:05:23.114220 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:23.115410 containerd[1445]: time="2025-09-13T00:05:23.114961407Z" level=info msg="StopPodSandbox for \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\"" Sep 13 00:05:23.115410 containerd[1445]: time="2025-09-13T00:05:23.115139009Z" level=info msg="Ensure that sandbox 0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e in task-service has been cleanup successfully" Sep 13 00:05:23.119992 containerd[1445]: time="2025-09-13T00:05:23.119958249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:05:23.121782 kubelet[2465]: I0913 00:05:23.121680 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:23.122626 containerd[1445]: time="2025-09-13T00:05:23.122477890Z" level=info msg="StopPodSandbox for \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\"" Sep 13 00:05:23.122693 containerd[1445]: time="2025-09-13T00:05:23.122662213Z" level=info msg="Ensure that sandbox 533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77 in task-service has been cleanup successfully" Sep 13 00:05:23.123260 kubelet[2465]: I0913 00:05:23.123226 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:23.123707 containerd[1445]: time="2025-09-13T00:05:23.123666389Z" level=info msg="StopPodSandbox for \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\"" Sep 13 00:05:23.125080 containerd[1445]: 
time="2025-09-13T00:05:23.123813512Z" level=info msg="Ensure that sandbox 6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82 in task-service has been cleanup successfully" Sep 13 00:05:23.127569 kubelet[2465]: I0913 00:05:23.127539 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:23.127998 containerd[1445]: time="2025-09-13T00:05:23.127970500Z" level=info msg="StopPodSandbox for \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\"" Sep 13 00:05:23.128232 containerd[1445]: time="2025-09-13T00:05:23.128211464Z" level=info msg="Ensure that sandbox 583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2 in task-service has been cleanup successfully" Sep 13 00:05:23.130508 kubelet[2465]: I0913 00:05:23.130473 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:23.132350 containerd[1445]: time="2025-09-13T00:05:23.132308611Z" level=info msg="StopPodSandbox for \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\"" Sep 13 00:05:23.132549 containerd[1445]: time="2025-09-13T00:05:23.132524815Z" level=info msg="Ensure that sandbox b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1 in task-service has been cleanup successfully" Sep 13 00:05:23.134588 kubelet[2465]: I0913 00:05:23.134557 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:23.135838 kubelet[2465]: I0913 00:05:23.135716 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:23.136712 containerd[1445]: time="2025-09-13T00:05:23.136377838Z" level=info msg="StopPodSandbox for \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\"" Sep 13 00:05:23.136712 containerd[1445]: time="2025-09-13T00:05:23.136548681Z" level=info msg="Ensure that sandbox 039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063 in task-service has been cleanup successfully" Sep 13 00:05:23.136712 containerd[1445]: time="2025-09-13T00:05:23.136616962Z" level=info msg="StopPodSandbox for \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\"" Sep 13 00:05:23.136839 containerd[1445]: time="2025-09-13T00:05:23.136811685Z" level=info msg="Ensure that sandbox 7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07 in task-service has been cleanup successfully" Sep 13 00:05:23.163476 containerd[1445]: time="2025-09-13T00:05:23.163401121Z" level=error msg="StopPodSandbox for \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\" failed" error="failed to destroy network for sandbox \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.163832 kubelet[2465]: E0913 00:05:23.163767 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:23.164230 kubelet[2465]: E0913 00:05:23.163850 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3"} Sep 13 00:05:23.164230 kubelet[2465]: E0913 00:05:23.163932 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af8d6313-8016-4b63-b286-8fb59033218e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:23.164230 kubelet[2465]: E0913 00:05:23.164017 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af8d6313-8016-4b63-b286-8fb59033218e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-42f6q" podUID="af8d6313-8016-4b63-b286-8fb59033218e" Sep 13 00:05:23.186536 containerd[1445]: time="2025-09-13T00:05:23.186448339Z" level=error msg="StopPodSandbox for \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\" failed" error="failed to destroy network for sandbox \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.186884 kubelet[2465]: E0913 00:05:23.186677 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:23.186884 kubelet[2465]: E0913 00:05:23.186869 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82"} Sep 13 00:05:23.186977 kubelet[2465]: E0913 00:05:23.186903 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5742ff1-d3ec-4bea-89be-500213d11650\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:23.187072 kubelet[2465]: E0913 00:05:23.187030 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"f5742ff1-d3ec-4bea-89be-500213d11650\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c55569db5-8wwmc" podUID="f5742ff1-d3ec-4bea-89be-500213d11650" Sep 13 00:05:23.187882 containerd[1445]: time="2025-09-13T00:05:23.187797761Z" level=error msg="StopPodSandbox for \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\" failed" error="failed to destroy network for sandbox \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.188340 kubelet[2465]: E0913 00:05:23.188260 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:23.188394 kubelet[2465]: E0913 00:05:23.188376 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063"} Sep 13 00:05:23.188528 kubelet[2465]: E0913 00:05:23.188405 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"071f06bb-6417-46e6-bfae-280329d73932\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:23.188599 kubelet[2465]: E0913 00:05:23.188532 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"071f06bb-6417-46e6-bfae-280329d73932\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f5b759db5-cjrjx" podUID="071f06bb-6417-46e6-bfae-280329d73932" Sep 13 00:05:23.189795 containerd[1445]: time="2025-09-13T00:05:23.189742833Z" level=error msg="StopPodSandbox for \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\" failed" error="failed to destroy network for sandbox \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.189965 kubelet[2465]: E0913 00:05:23.189925 2465 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:23.190451 kubelet[2465]: E0913 00:05:23.190413 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07"} Sep 13 00:05:23.190511 kubelet[2465]: E0913 00:05:23.190454 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa29da06-fe61-4e04-b4a0-a7f57e663902\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:23.190511 kubelet[2465]: E0913 00:05:23.190476 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa29da06-fe61-4e04-b4a0-a7f57e663902\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-snwp2" podUID="fa29da06-fe61-4e04-b4a0-a7f57e663902" Sep 13 00:05:23.196541 containerd[1445]: time="2025-09-13T00:05:23.196473783Z" level=error msg="StopPodSandbox for \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\" failed" error="failed to destroy network for sandbox \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.196784 kubelet[2465]: E0913 00:05:23.196697 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:23.196839 kubelet[2465]: E0913 00:05:23.196798 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1"} Sep 13 00:05:23.196839 kubelet[2465]: E0913 00:05:23.196828 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"616325c3-a8fc-46b9-a361-13ec6d383dfa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:23.196926 kubelet[2465]: E0913 00:05:23.196847 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"616325c3-a8fc-46b9-a361-13ec6d383dfa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85f66bff6f-klv7r" podUID="616325c3-a8fc-46b9-a361-13ec6d383dfa" Sep 13 00:05:23.200849 containerd[1445]: time="2025-09-13T00:05:23.200251245Z" level=error msg="StopPodSandbox for \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\" failed" error="failed to destroy network for sandbox \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.200912 kubelet[2465]: E0913 00:05:23.200444 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:23.200912 kubelet[2465]: E0913 00:05:23.200481 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2"} Sep 13 00:05:23.200912 kubelet[2465]: E0913 00:05:23.200507 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d15f87e-d4b3-4e20-9451-06b0fba27ad4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:23.200912 kubelet[2465]: E0913 00:05:23.200537 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d15f87e-d4b3-4e20-9451-06b0fba27ad4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c5t74" podUID="7d15f87e-d4b3-4e20-9451-06b0fba27ad4" Sep 13 00:05:23.202223 containerd[1445]: time="2025-09-13T00:05:23.202186557Z" level=error msg="StopPodSandbox for \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\" failed" error="failed to destroy network for sandbox \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.202587 kubelet[2465]: E0913 00:05:23.202544 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:23.202665 kubelet[2465]: E0913 00:05:23.202604 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77"} Sep 13 00:05:23.202665 kubelet[2465]: E0913 00:05:23.202632 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37010d28-d42d-4197-85d1-065db74b5133\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:23.202665 kubelet[2465]: E0913 00:05:23.202650 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37010d28-d42d-4197-85d1-065db74b5133\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-lmfsr" podUID="37010d28-d42d-4197-85d1-065db74b5133" Sep 13 00:05:23.209511 containerd[1445]: time="2025-09-13T00:05:23.209470317Z" level=error msg="StopPodSandbox for \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\" failed" error="failed to destroy network for sandbox \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:23.209739 kubelet[2465]: E0913 00:05:23.209706 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:23.209800 kubelet[2465]: E0913 00:05:23.209752 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e"} Sep 13 00:05:23.209800 kubelet[2465]: E0913 00:05:23.209784 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"7691cf75-c4db-4e69-bd59-9bd4189e2702\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:23.209862 kubelet[2465]: E0913 00:05:23.209805 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7691cf75-c4db-4e69-bd59-9bd4189e2702\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c55569db5-8jkvx" podUID="7691cf75-c4db-4e69-bd59-9bd4189e2702" Sep 13 00:05:27.244158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3946445697.mount: Deactivated successfully. Sep 13 00:05:27.595491 containerd[1445]: time="2025-09-13T00:05:27.595329581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:27.603951 containerd[1445]: time="2025-09-13T00:05:27.603877821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 13 00:05:27.608870 containerd[1445]: time="2025-09-13T00:05:27.608804651Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:27.623587 containerd[1445]: time="2025-09-13T00:05:27.622795208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:27.623587 containerd[1445]: time="2025-09-13T00:05:27.623502938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.503498129s" Sep 13 00:05:27.623587 containerd[1445]: time="2025-09-13T00:05:27.623548059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 13 00:05:27.650696 containerd[1445]: time="2025-09-13T00:05:27.649054659Z" level=info msg="CreateContainer within sandbox \"735fe8b9bd6a4372c08bfb4e734d2a79af88a97a5b22bbf943de842c1cbd8581\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:05:27.725194 containerd[1445]: time="2025-09-13T00:05:27.725135252Z" level=info msg="CreateContainer within sandbox \"735fe8b9bd6a4372c08bfb4e734d2a79af88a97a5b22bbf943de842c1cbd8581\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"269b3ddbd88044a1c48a31ec44b12154cb0adaebdd78c401bc99ee27f4fdf0c7\"" Sep 13 00:05:27.725715 containerd[1445]: time="2025-09-13T00:05:27.725670699Z" level=info msg="StartContainer for 
\"269b3ddbd88044a1c48a31ec44b12154cb0adaebdd78c401bc99ee27f4fdf0c7\"" Sep 13 00:05:27.780276 systemd[1]: Started cri-containerd-269b3ddbd88044a1c48a31ec44b12154cb0adaebdd78c401bc99ee27f4fdf0c7.scope - libcontainer container 269b3ddbd88044a1c48a31ec44b12154cb0adaebdd78c401bc99ee27f4fdf0c7. Sep 13 00:05:27.811189 containerd[1445]: time="2025-09-13T00:05:27.811147785Z" level=info msg="StartContainer for \"269b3ddbd88044a1c48a31ec44b12154cb0adaebdd78c401bc99ee27f4fdf0c7\" returns successfully" Sep 13 00:05:27.938366 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:05:27.938806 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:05:28.060293 containerd[1445]: time="2025-09-13T00:05:28.060249311Z" level=info msg="StopPodSandbox for \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\"" Sep 13 00:05:28.172674 kubelet[2465]: I0913 00:05:28.171988 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g29x9" podStartSLOduration=1.056805224 podStartE2EDuration="13.171969233s" podCreationTimestamp="2025-09-13 00:05:15 +0000 UTC" firstStartedPulling="2025-09-13 00:05:15.526988672 +0000 UTC m=+20.624635155" lastFinishedPulling="2025-09-13 00:05:27.642152641 +0000 UTC m=+32.739799164" observedRunningTime="2025-09-13 00:05:28.170847617 +0000 UTC m=+33.268494100" watchObservedRunningTime="2025-09-13 00:05:28.171969233 +0000 UTC m=+33.269615756" Sep 13 00:05:28.235272 containerd[1445]: 2025-09-13 00:05:28.150 [INFO][3763] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:28.235272 containerd[1445]: 2025-09-13 00:05:28.150 [INFO][3763] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" iface="eth0" netns="/var/run/netns/cni-bf1341e4-3ace-ccc7-14c2-725045c8e8da" Sep 13 00:05:28.235272 containerd[1445]: 2025-09-13 00:05:28.151 [INFO][3763] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" iface="eth0" netns="/var/run/netns/cni-bf1341e4-3ace-ccc7-14c2-725045c8e8da" Sep 13 00:05:28.235272 containerd[1445]: 2025-09-13 00:05:28.151 [INFO][3763] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" iface="eth0" netns="/var/run/netns/cni-bf1341e4-3ace-ccc7-14c2-725045c8e8da" Sep 13 00:05:28.235272 containerd[1445]: 2025-09-13 00:05:28.152 [INFO][3763] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:28.235272 containerd[1445]: 2025-09-13 00:05:28.152 [INFO][3763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:28.235272 containerd[1445]: 2025-09-13 00:05:28.220 [INFO][3779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" HandleID="k8s-pod-network.b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Workload="localhost-k8s-whisker--85f66bff6f--klv7r-eth0" Sep 13 00:05:28.235272 containerd[1445]: 2025-09-13 00:05:28.220 [INFO][3779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:05:28.235272 containerd[1445]: 2025-09-13 00:05:28.220 [INFO][3779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:28.235272 containerd[1445]: 2025-09-13 00:05:28.230 [WARNING][3779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" HandleID="k8s-pod-network.b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Workload="localhost-k8s-whisker--85f66bff6f--klv7r-eth0" Sep 13 00:05:28.235272 containerd[1445]: 2025-09-13 00:05:28.230 [INFO][3779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" HandleID="k8s-pod-network.b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Workload="localhost-k8s-whisker--85f66bff6f--klv7r-eth0" Sep 13 00:05:28.235272 containerd[1445]: 2025-09-13 00:05:28.231 [INFO][3779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:28.235272 containerd[1445]: 2025-09-13 00:05:28.233 [INFO][3763] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:28.235758 containerd[1445]: time="2025-09-13T00:05:28.235337456Z" level=info msg="TearDown network for sandbox \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\" successfully" Sep 13 00:05:28.235758 containerd[1445]: time="2025-09-13T00:05:28.235363336Z" level=info msg="StopPodSandbox for \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\" returns successfully" Sep 13 00:05:28.245211 systemd[1]: run-netns-cni\x2dbf1341e4\x2d3ace\x2dccc7\x2d14c2\x2d725045c8e8da.mount: Deactivated successfully. Sep 13 00:05:28.419874 kubelet[2465]: I0913 00:05:28.419780 2465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/616325c3-a8fc-46b9-a361-13ec6d383dfa-whisker-ca-bundle\") pod \"616325c3-a8fc-46b9-a361-13ec6d383dfa\" (UID: \"616325c3-a8fc-46b9-a361-13ec6d383dfa\") " Sep 13 00:05:28.419874 kubelet[2465]: I0913 00:05:28.419832 2465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/616325c3-a8fc-46b9-a361-13ec6d383dfa-whisker-backend-key-pair\") pod \"616325c3-a8fc-46b9-a361-13ec6d383dfa\" (UID: \"616325c3-a8fc-46b9-a361-13ec6d383dfa\") " Sep 13 00:05:28.419874 kubelet[2465]: I0913 00:05:28.419855 2465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj6xt\" (UniqueName: \"kubernetes.io/projected/616325c3-a8fc-46b9-a361-13ec6d383dfa-kube-api-access-pj6xt\") pod \"616325c3-a8fc-46b9-a361-13ec6d383dfa\" (UID: \"616325c3-a8fc-46b9-a361-13ec6d383dfa\") " Sep 13 00:05:28.424495 kubelet[2465]: I0913 00:05:28.424418 2465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616325c3-a8fc-46b9-a361-13ec6d383dfa-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "616325c3-a8fc-46b9-a361-13ec6d383dfa" (UID: "616325c3-a8fc-46b9-a361-13ec6d383dfa"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:05:28.429927 kubelet[2465]: I0913 00:05:28.429875 2465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/616325c3-a8fc-46b9-a361-13ec6d383dfa-kube-api-access-pj6xt" (OuterVolumeSpecName: "kube-api-access-pj6xt") pod "616325c3-a8fc-46b9-a361-13ec6d383dfa" (UID: "616325c3-a8fc-46b9-a361-13ec6d383dfa"). InnerVolumeSpecName "kube-api-access-pj6xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:05:28.430063 systemd[1]: var-lib-kubelet-pods-616325c3\x2da8fc\x2d46b9\x2da361\x2d13ec6d383dfa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpj6xt.mount: Deactivated successfully. Sep 13 00:05:28.433080 kubelet[2465]: I0913 00:05:28.432994 2465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/616325c3-a8fc-46b9-a361-13ec6d383dfa-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "616325c3-a8fc-46b9-a361-13ec6d383dfa" (UID: "616325c3-a8fc-46b9-a361-13ec6d383dfa"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:05:28.435141 systemd[1]: var-lib-kubelet-pods-616325c3\x2da8fc\x2d46b9\x2da361\x2d13ec6d383dfa-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:05:28.520849 kubelet[2465]: I0913 00:05:28.520701 2465 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/616325c3-a8fc-46b9-a361-13ec6d383dfa-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:05:28.520849 kubelet[2465]: I0913 00:05:28.520746 2465 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/616325c3-a8fc-46b9-a361-13ec6d383dfa-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:05:28.520849 kubelet[2465]: I0913 00:05:28.520756 2465 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj6xt\" (UniqueName: \"kubernetes.io/projected/616325c3-a8fc-46b9-a361-13ec6d383dfa-kube-api-access-pj6xt\") on node \"localhost\" DevicePath \"\"" Sep 13 00:05:29.000862 systemd[1]: Removed slice kubepods-besteffort-pod616325c3_a8fc_46b9_a361_13ec6d383dfa.slice - libcontainer container kubepods-besteffort-pod616325c3_a8fc_46b9_a361_13ec6d383dfa.slice. Sep 13 00:05:29.151878 kubelet[2465]: I0913 00:05:29.151849 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:05:29.219492 systemd[1]: Created slice kubepods-besteffort-pod36607c37_682a_4209_aca5_1f7336821d99.slice - libcontainer container kubepods-besteffort-pod36607c37_682a_4209_aca5_1f7336821d99.slice. 
Sep 13 00:05:29.325119 kubelet[2465]: I0913 00:05:29.324753 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tql8q\" (UniqueName: \"kubernetes.io/projected/36607c37-682a-4209-aca5-1f7336821d99-kube-api-access-tql8q\") pod \"whisker-6df9688c44-grk7r\" (UID: \"36607c37-682a-4209-aca5-1f7336821d99\") " pod="calico-system/whisker-6df9688c44-grk7r" Sep 13 00:05:29.325119 kubelet[2465]: I0913 00:05:29.324820 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/36607c37-682a-4209-aca5-1f7336821d99-whisker-backend-key-pair\") pod \"whisker-6df9688c44-grk7r\" (UID: \"36607c37-682a-4209-aca5-1f7336821d99\") " pod="calico-system/whisker-6df9688c44-grk7r" Sep 13 00:05:29.325119 kubelet[2465]: I0913 00:05:29.324842 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36607c37-682a-4209-aca5-1f7336821d99-whisker-ca-bundle\") pod \"whisker-6df9688c44-grk7r\" (UID: \"36607c37-682a-4209-aca5-1f7336821d99\") " pod="calico-system/whisker-6df9688c44-grk7r" Sep 13 00:05:29.523698 containerd[1445]: time="2025-09-13T00:05:29.523651086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6df9688c44-grk7r,Uid:36607c37-682a-4209-aca5-1f7336821d99,Namespace:calico-system,Attempt:0,}" Sep 13 00:05:29.626078 kernel: bpftool[3952]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:05:29.685497 systemd-networkd[1386]: cali5370d0fe985: Link UP Sep 13 00:05:29.686738 systemd-networkd[1386]: cali5370d0fe985: Gained carrier Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.589 [INFO][3911] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.607 [INFO][3911] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6df9688c44--grk7r-eth0 whisker-6df9688c44- calico-system 36607c37-682a-4209-aca5-1f7336821d99 948 0 2025-09-13 00:05:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6df9688c44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6df9688c44-grk7r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5370d0fe985 [] [] }} ContainerID="01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" Namespace="calico-system" Pod="whisker-6df9688c44-grk7r" WorkloadEndpoint="localhost-k8s-whisker--6df9688c44--grk7r-" Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.607 [INFO][3911] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" Namespace="calico-system" Pod="whisker-6df9688c44-grk7r" WorkloadEndpoint="localhost-k8s-whisker--6df9688c44--grk7r-eth0" Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.638 [INFO][3942] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" HandleID="k8s-pod-network.01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" Workload="localhost-k8s-whisker--6df9688c44--grk7r-eth0" Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.638 [INFO][3942] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" HandleID="k8s-pod-network.01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" Workload="localhost-k8s-whisker--6df9688c44--grk7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cf80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6df9688c44-grk7r", "timestamp":"2025-09-13 00:05:29.638426797 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.638 [INFO][3942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.638 [INFO][3942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.638 [INFO][3942] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.650 [INFO][3942] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" host="localhost" Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.656 [INFO][3942] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.661 [INFO][3942] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.662 [INFO][3942] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.665 [INFO][3942] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.665 [INFO][3942] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" host="localhost" Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.667 [INFO][3942] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.672 [INFO][3942] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" host="localhost" Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.676 [INFO][3942] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" host="localhost" Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.677 [INFO][3942] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" host="localhost" Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.677 [INFO][3942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:05:29.706108 containerd[1445]: 2025-09-13 00:05:29.677 [INFO][3942] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" HandleID="k8s-pod-network.01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" Workload="localhost-k8s-whisker--6df9688c44--grk7r-eth0" Sep 13 00:05:29.706656 containerd[1445]: 2025-09-13 00:05:29.679 [INFO][3911] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" Namespace="calico-system" Pod="whisker-6df9688c44-grk7r" WorkloadEndpoint="localhost-k8s-whisker--6df9688c44--grk7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6df9688c44--grk7r-eth0", GenerateName:"whisker-6df9688c44-", Namespace:"calico-system", SelfLink:"", UID:"36607c37-682a-4209-aca5-1f7336821d99", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6df9688c44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6df9688c44-grk7r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5370d0fe985", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:29.706656 containerd[1445]: 2025-09-13 00:05:29.679 [INFO][3911] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" Namespace="calico-system" Pod="whisker-6df9688c44-grk7r" WorkloadEndpoint="localhost-k8s-whisker--6df9688c44--grk7r-eth0" Sep 13 00:05:29.706656 containerd[1445]: 2025-09-13 00:05:29.679 [INFO][3911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5370d0fe985 ContainerID="01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" Namespace="calico-system" Pod="whisker-6df9688c44-grk7r" WorkloadEndpoint="localhost-k8s-whisker--6df9688c44--grk7r-eth0" Sep 13 00:05:29.706656 containerd[1445]: 2025-09-13 00:05:29.687 [INFO][3911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" Namespace="calico-system" Pod="whisker-6df9688c44-grk7r" WorkloadEndpoint="localhost-k8s-whisker--6df9688c44--grk7r-eth0" Sep 13 00:05:29.706656 containerd[1445]: 2025-09-13 00:05:29.687 [INFO][3911] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" Namespace="calico-system" Pod="whisker-6df9688c44-grk7r" WorkloadEndpoint="localhost-k8s-whisker--6df9688c44--grk7r-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6df9688c44--grk7r-eth0", GenerateName:"whisker-6df9688c44-", Namespace:"calico-system", SelfLink:"", UID:"36607c37-682a-4209-aca5-1f7336821d99", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6df9688c44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e", Pod:"whisker-6df9688c44-grk7r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5370d0fe985", MAC:"82:3a:75:1e:a2:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:29.706656 containerd[1445]: 2025-09-13 00:05:29.701 [INFO][3911] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e" Namespace="calico-system" Pod="whisker-6df9688c44-grk7r" WorkloadEndpoint="localhost-k8s-whisker--6df9688c44--grk7r-eth0" Sep 13 00:05:29.725704 containerd[1445]: time="2025-09-13T00:05:29.725340421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:29.725704 containerd[1445]: time="2025-09-13T00:05:29.725445343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:29.725704 containerd[1445]: time="2025-09-13T00:05:29.725511704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:29.726080 containerd[1445]: time="2025-09-13T00:05:29.725687626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:29.745236 systemd[1]: Started cri-containerd-01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e.scope - libcontainer container 01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e. 
Sep 13 00:05:29.760244 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:05:29.782938 containerd[1445]: time="2025-09-13T00:05:29.782878419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6df9688c44-grk7r,Uid:36607c37-682a-4209-aca5-1f7336821d99,Namespace:calico-system,Attempt:0,} returns sandbox id \"01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e\"" Sep 13 00:05:29.785264 containerd[1445]: time="2025-09-13T00:05:29.785228730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:05:29.821417 systemd-networkd[1386]: vxlan.calico: Link UP Sep 13 00:05:29.821426 systemd-networkd[1386]: vxlan.calico: Gained carrier Sep 13 00:05:30.992455 kubelet[2465]: I0913 00:05:30.992401 2465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="616325c3-a8fc-46b9-a361-13ec6d383dfa" path="/var/lib/kubelet/pods/616325c3-a8fc-46b9-a361-13ec6d383dfa/volumes" Sep 13 00:05:31.076905 containerd[1445]: time="2025-09-13T00:05:31.076830804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:31.077543 containerd[1445]: time="2025-09-13T00:05:31.077501292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 13 00:05:31.078407 containerd[1445]: time="2025-09-13T00:05:31.078357383Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:31.080549 containerd[1445]: time="2025-09-13T00:05:31.080508609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:31.081517 containerd[1445]: time="2025-09-13T00:05:31.081219898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.295956608s" Sep 13 00:05:31.081517 containerd[1445]: time="2025-09-13T00:05:31.081253298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 13 00:05:31.083921 containerd[1445]: time="2025-09-13T00:05:31.083882091Z" level=info msg="CreateContainer within sandbox \"01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:05:31.103934 containerd[1445]: time="2025-09-13T00:05:31.103886578Z" level=info msg="CreateContainer within sandbox \"01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"cb71f0143879ac6cf2200aa29a9a8f87322d990d53da8eb53a4db80adaafd98a\"" Sep 13 00:05:31.109523 containerd[1445]: time="2025-09-13T00:05:31.109484567Z" level=info msg="StartContainer for \"cb71f0143879ac6cf2200aa29a9a8f87322d990d53da8eb53a4db80adaafd98a\"" Sep 13 00:05:31.147258 systemd[1]: Started 
cri-containerd-cb71f0143879ac6cf2200aa29a9a8f87322d990d53da8eb53a4db80adaafd98a.scope - libcontainer container cb71f0143879ac6cf2200aa29a9a8f87322d990d53da8eb53a4db80adaafd98a. Sep 13 00:05:31.154170 systemd-networkd[1386]: cali5370d0fe985: Gained IPv6LL Sep 13 00:05:31.179971 containerd[1445]: time="2025-09-13T00:05:31.179919676Z" level=info msg="StartContainer for \"cb71f0143879ac6cf2200aa29a9a8f87322d990d53da8eb53a4db80adaafd98a\" returns successfully" Sep 13 00:05:31.182553 containerd[1445]: time="2025-09-13T00:05:31.181171331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:05:31.602163 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL Sep 13 00:05:33.034363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2675509288.mount: Deactivated successfully. Sep 13 00:05:33.122599 containerd[1445]: time="2025-09-13T00:05:33.122537897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:33.123165 containerd[1445]: time="2025-09-13T00:05:33.123128384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 13 00:05:33.124164 containerd[1445]: time="2025-09-13T00:05:33.124111836Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:33.145906 containerd[1445]: time="2025-09-13T00:05:33.145773727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.964560716s" Sep 13 00:05:33.145906 containerd[1445]: time="2025-09-13T00:05:33.145828408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 13 00:05:33.146247 containerd[1445]: time="2025-09-13T00:05:33.145983610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:33.150432 containerd[1445]: time="2025-09-13T00:05:33.150395701Z" level=info msg="CreateContainer within sandbox \"01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:05:33.162621 containerd[1445]: time="2025-09-13T00:05:33.162577082Z" level=info msg="CreateContainer within sandbox \"01dc48a6c23f5860a37f32a7d6f946bf060541ac66b01b3e3d70ca160a9a5c1e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"51a39ab4581b5aacee55dfc3714b003742a63daa47e11a1f1bab0edc175c5d9f\"" Sep 13 00:05:33.163372 containerd[1445]: time="2025-09-13T00:05:33.163022887Z" level=info msg="StartContainer for \"51a39ab4581b5aacee55dfc3714b003742a63daa47e11a1f1bab0edc175c5d9f\"" Sep 13 00:05:33.198219 systemd[1]: Started cri-containerd-51a39ab4581b5aacee55dfc3714b003742a63daa47e11a1f1bab0edc175c5d9f.scope - libcontainer container 51a39ab4581b5aacee55dfc3714b003742a63daa47e11a1f1bab0edc175c5d9f. 
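Note: containerd reports an elapsed figure for each image pull above ("in 1.295956608s" for the whisker image, "in 1.964560716s" for whisker-backend). A small standard-library sketch that just sums the two logged durations; this is time spent pulling only, not the pod's end-to-end startup time, which kubelet reports separately further down.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Durations copied verbatim from the containerd "Pulled image" lines.
	whisker, _ := time.ParseDuration("1.295956608s")
	backend, _ := time.ParseDuration("1.964560716s")
	fmt.Println(whisker + backend) // 3.260517324s spent pulling the two whisker images
}
```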
Sep 13 00:05:33.236895 containerd[1445]: time="2025-09-13T00:05:33.236832104Z" level=info msg="StartContainer for \"51a39ab4581b5aacee55dfc3714b003742a63daa47e11a1f1bab0edc175c5d9f\" returns successfully" Sep 13 00:05:33.990405 containerd[1445]: time="2025-09-13T00:05:33.989983127Z" level=info msg="StopPodSandbox for \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\"" Sep 13 00:05:33.990405 containerd[1445]: time="2025-09-13T00:05:33.990151409Z" level=info msg="StopPodSandbox for \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\"" Sep 13 00:05:33.991980 containerd[1445]: time="2025-09-13T00:05:33.991418984Z" level=info msg="StopPodSandbox for \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\"" Sep 13 00:05:34.115133 containerd[1445]: 2025-09-13 00:05:34.066 [INFO][4201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:34.115133 containerd[1445]: 2025-09-13 00:05:34.066 [INFO][4201] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" iface="eth0" netns="/var/run/netns/cni-f2e5e27d-147a-4610-9843-328f67f9dc1d" Sep 13 00:05:34.115133 containerd[1445]: 2025-09-13 00:05:34.067 [INFO][4201] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" iface="eth0" netns="/var/run/netns/cni-f2e5e27d-147a-4610-9843-328f67f9dc1d" Sep 13 00:05:34.115133 containerd[1445]: 2025-09-13 00:05:34.067 [INFO][4201] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" iface="eth0" netns="/var/run/netns/cni-f2e5e27d-147a-4610-9843-328f67f9dc1d" Sep 13 00:05:34.115133 containerd[1445]: 2025-09-13 00:05:34.067 [INFO][4201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:34.115133 containerd[1445]: 2025-09-13 00:05:34.067 [INFO][4201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:34.115133 containerd[1445]: 2025-09-13 00:05:34.098 [INFO][4224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" HandleID="k8s-pod-network.533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Workload="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:34.115133 containerd[1445]: 2025-09-13 00:05:34.098 [INFO][4224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:34.115133 containerd[1445]: 2025-09-13 00:05:34.098 [INFO][4224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:34.115133 containerd[1445]: 2025-09-13 00:05:34.110 [WARNING][4224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" HandleID="k8s-pod-network.533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Workload="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:34.115133 containerd[1445]: 2025-09-13 00:05:34.110 [INFO][4224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" HandleID="k8s-pod-network.533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Workload="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:34.115133 containerd[1445]: 2025-09-13 00:05:34.111 [INFO][4224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:34.115133 containerd[1445]: 2025-09-13 00:05:34.113 [INFO][4201] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:34.119638 containerd[1445]: time="2025-09-13T00:05:34.117712492Z" level=info msg="TearDown network for sandbox \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\" successfully" Sep 13 00:05:34.119638 containerd[1445]: time="2025-09-13T00:05:34.117749772Z" level=info msg="StopPodSandbox for \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\" returns successfully" Sep 13 00:05:34.119638 containerd[1445]: time="2025-09-13T00:05:34.118589741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lmfsr,Uid:37010d28-d42d-4197-85d1-065db74b5133,Namespace:kube-system,Attempt:1,}" Sep 13 00:05:34.118888 systemd[1]: run-netns-cni\x2df2e5e27d\x2d147a\x2d4610\x2d9843\x2d328f67f9dc1d.mount: Deactivated successfully. Sep 13 00:05:34.120276 kubelet[2465]: E0913 00:05:34.118106 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:34.133141 containerd[1445]: 2025-09-13 00:05:34.068 [INFO][4200] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:34.133141 containerd[1445]: 2025-09-13 00:05:34.068 [INFO][4200] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" iface="eth0" netns="/var/run/netns/cni-fd1e12d3-d762-ff6e-50dd-46ff26b76979" Sep 13 00:05:34.133141 containerd[1445]: 2025-09-13 00:05:34.068 [INFO][4200] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" iface="eth0" netns="/var/run/netns/cni-fd1e12d3-d762-ff6e-50dd-46ff26b76979" Sep 13 00:05:34.133141 containerd[1445]: 2025-09-13 00:05:34.069 [INFO][4200] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" iface="eth0" netns="/var/run/netns/cni-fd1e12d3-d762-ff6e-50dd-46ff26b76979" Sep 13 00:05:34.133141 containerd[1445]: 2025-09-13 00:05:34.069 [INFO][4200] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:34.133141 containerd[1445]: 2025-09-13 00:05:34.069 [INFO][4200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:34.133141 containerd[1445]: 2025-09-13 00:05:34.099 [INFO][4231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" HandleID="k8s-pod-network.039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Workload="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:34.133141 containerd[1445]: 2025-09-13 00:05:34.099 [INFO][4231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:34.133141 containerd[1445]: 2025-09-13 00:05:34.111 [INFO][4231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:34.133141 containerd[1445]: 2025-09-13 00:05:34.124 [WARNING][4231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" HandleID="k8s-pod-network.039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Workload="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:34.133141 containerd[1445]: 2025-09-13 00:05:34.124 [INFO][4231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" HandleID="k8s-pod-network.039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Workload="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:34.133141 containerd[1445]: 2025-09-13 00:05:34.129 [INFO][4231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:34.133141 containerd[1445]: 2025-09-13 00:05:34.131 [INFO][4200] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:34.133739 containerd[1445]: time="2025-09-13T00:05:34.133419229Z" level=info msg="TearDown network for sandbox \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\" successfully" Sep 13 00:05:34.133739 containerd[1445]: time="2025-09-13T00:05:34.133446829Z" level=info msg="StopPodSandbox for \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\" returns successfully" Sep 13 00:05:34.136149 containerd[1445]: time="2025-09-13T00:05:34.134105276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5b759db5-cjrjx,Uid:071f06bb-6417-46e6-bfae-280329d73932,Namespace:calico-system,Attempt:1,}" Sep 13 00:05:34.135936 systemd[1]: run-netns-cni\x2dfd1e12d3\x2dd762\x2dff6e\x2d50dd\x2d46ff26b76979.mount: Deactivated successfully. Sep 13 00:05:34.145008 containerd[1445]: 2025-09-13 00:05:34.078 [INFO][4199] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:34.145008 containerd[1445]: 2025-09-13 00:05:34.079 [INFO][4199] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" iface="eth0" netns="/var/run/netns/cni-ebce8e47-ecb1-0cba-a914-753fc1ad94bf" Sep 13 00:05:34.145008 containerd[1445]: 2025-09-13 00:05:34.079 [INFO][4199] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" iface="eth0" netns="/var/run/netns/cni-ebce8e47-ecb1-0cba-a914-753fc1ad94bf" Sep 13 00:05:34.145008 containerd[1445]: 2025-09-13 00:05:34.081 [INFO][4199] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" iface="eth0" netns="/var/run/netns/cni-ebce8e47-ecb1-0cba-a914-753fc1ad94bf" Sep 13 00:05:34.145008 containerd[1445]: 2025-09-13 00:05:34.081 [INFO][4199] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:34.145008 containerd[1445]: 2025-09-13 00:05:34.081 [INFO][4199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:34.145008 containerd[1445]: 2025-09-13 00:05:34.112 [INFO][4239] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" HandleID="k8s-pod-network.7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Workload="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:34.145008 containerd[1445]: 2025-09-13 00:05:34.113 [INFO][4239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:34.145008 containerd[1445]: 2025-09-13 00:05:34.129 [INFO][4239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:34.145008 containerd[1445]: 2025-09-13 00:05:34.139 [WARNING][4239] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" HandleID="k8s-pod-network.7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Workload="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:34.145008 containerd[1445]: 2025-09-13 00:05:34.139 [INFO][4239] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" HandleID="k8s-pod-network.7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Workload="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:34.145008 containerd[1445]: 2025-09-13 00:05:34.141 [INFO][4239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:34.145008 containerd[1445]: 2025-09-13 00:05:34.143 [INFO][4199] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:34.145465 containerd[1445]: time="2025-09-13T00:05:34.145194282Z" level=info msg="TearDown network for sandbox \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\" successfully" Sep 13 00:05:34.145465 containerd[1445]: time="2025-09-13T00:05:34.145218042Z" level=info msg="StopPodSandbox for \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\" returns successfully" Sep 13 00:05:34.145900 containerd[1445]: time="2025-09-13T00:05:34.145850569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-snwp2,Uid:fa29da06-fe61-4e04-b4a0-a7f57e663902,Namespace:calico-system,Attempt:1,}" Sep 13 00:05:34.147849 systemd[1]: run-netns-cni\x2debce8e47\x2decb1\x2d0cba\x2da914\x2d753fc1ad94bf.mount: Deactivated successfully. Sep 13 00:05:34.294280 systemd-networkd[1386]: calib1dfcee83e7: Link UP Sep 13 00:05:34.294490 systemd-networkd[1386]: calib1dfcee83e7: Gained carrier Sep 13 00:05:34.308964 kubelet[2465]: I0913 00:05:34.308635 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6df9688c44-grk7r" podStartSLOduration=1.945385776 podStartE2EDuration="5.308614605s" podCreationTimestamp="2025-09-13 00:05:29 +0000 UTC" firstStartedPulling="2025-09-13 00:05:29.78445412 +0000 UTC m=+34.882100643" lastFinishedPulling="2025-09-13 00:05:33.147682949 +0000 UTC m=+38.245329472" observedRunningTime="2025-09-13 00:05:34.182984948 +0000 UTC m=+39.280631471" watchObservedRunningTime="2025-09-13 00:05:34.308614605 +0000 UTC m=+39.406261128" Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.216 [INFO][4251] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--snwp2-eth0 goldmane-7988f88666- calico-system fa29da06-fe61-4e04-b4a0-a7f57e663902 975 0 2025-09-13 00:05:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-snwp2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib1dfcee83e7 [] [] }} ContainerID="f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" Namespace="calico-system" Pod="goldmane-7988f88666-snwp2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--snwp2-" Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.216 [INFO][4251] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" Namespace="calico-system" Pod="goldmane-7988f88666-snwp2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.249 [INFO][4295] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" HandleID="k8s-pod-network.f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" Workload="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.249 [INFO][4295] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" HandleID="k8s-pod-network.f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" 
Workload="localhost-k8s-goldmane--7988f88666--snwp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-snwp2", "timestamp":"2025-09-13 00:05:34.249619179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.249 [INFO][4295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.249 [INFO][4295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.249 [INFO][4295] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.262 [INFO][4295] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" host="localhost" Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.266 [INFO][4295] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.272 [INFO][4295] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.274 [INFO][4295] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.276 [INFO][4295] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.276 [INFO][4295] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" host="localhost" Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.277 [INFO][4295] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.281 [INFO][4295] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" host="localhost" Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.287 [INFO][4295] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" host="localhost" Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.287 [INFO][4295] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" host="localhost" Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.287 [INFO][4295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:05:34.313625 containerd[1445]: 2025-09-13 00:05:34.287 [INFO][4295] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" HandleID="k8s-pod-network.f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" Workload="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:34.314174 containerd[1445]: 2025-09-13 00:05:34.289 [INFO][4251] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" Namespace="calico-system" Pod="goldmane-7988f88666-snwp2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--snwp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--snwp2-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"fa29da06-fe61-4e04-b4a0-a7f57e663902", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-snwp2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1dfcee83e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:34.314174 containerd[1445]: 2025-09-13 00:05:34.289 [INFO][4251] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" Namespace="calico-system" Pod="goldmane-7988f88666-snwp2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:34.314174 containerd[1445]: 2025-09-13 00:05:34.289 [INFO][4251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1dfcee83e7 ContainerID="f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" Namespace="calico-system" Pod="goldmane-7988f88666-snwp2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:34.314174 containerd[1445]: 2025-09-13 00:05:34.295 [INFO][4251] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" Namespace="calico-system" Pod="goldmane-7988f88666-snwp2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:34.314174 containerd[1445]: 2025-09-13 00:05:34.296 [INFO][4251] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" Namespace="calico-system" Pod="goldmane-7988f88666-snwp2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--snwp2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--snwp2-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"fa29da06-fe61-4e04-b4a0-a7f57e663902", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec", Pod:"goldmane-7988f88666-snwp2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1dfcee83e7", MAC:"ae:52:da:53:f5:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:34.314174 containerd[1445]: 2025-09-13 00:05:34.310 [INFO][4251] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec" Namespace="calico-system" Pod="goldmane-7988f88666-snwp2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:34.330365 containerd[1445]: time="2025-09-13T00:05:34.330267849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:34.330365 containerd[1445]: time="2025-09-13T00:05:34.330360090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:34.330612 containerd[1445]: time="2025-09-13T00:05:34.330382810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:34.330612 containerd[1445]: time="2025-09-13T00:05:34.330576452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:34.354219 systemd[1]: Started cri-containerd-f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec.scope - libcontainer container f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec. 
Sep 13 00:05:34.364731 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:05:34.386361 containerd[1445]: time="2025-09-13T00:05:34.386221480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-snwp2,Uid:fa29da06-fe61-4e04-b4a0-a7f57e663902,Namespace:calico-system,Attempt:1,} returns sandbox id \"f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec\"" Sep 13 00:05:34.388555 containerd[1445]: time="2025-09-13T00:05:34.388519426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:05:34.408509 systemd-networkd[1386]: cali3a5598628df: Link UP Sep 13 00:05:34.410765 systemd-networkd[1386]: cali3a5598628df: Gained carrier Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.226 [INFO][4269] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0 coredns-7c65d6cfc9- kube-system 37010d28-d42d-4197-85d1-065db74b5133 973 0 2025-09-13 00:05:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-lmfsr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3a5598628df [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lmfsr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lmfsr-" Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.226 [INFO][4269] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lmfsr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.257 [INFO][4302] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" HandleID="k8s-pod-network.e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" Workload="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.257 [INFO][4302] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" HandleID="k8s-pod-network.e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" Workload="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001374d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-lmfsr", "timestamp":"2025-09-13 00:05:34.257121384 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.257 [INFO][4302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.287 [INFO][4302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.287 [INFO][4302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.365 [INFO][4302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" host="localhost" Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.371 [INFO][4302] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.377 [INFO][4302] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.378 [INFO][4302] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.381 [INFO][4302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.381 [INFO][4302] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" host="localhost" Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.383 [INFO][4302] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423 Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.387 [INFO][4302] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" host="localhost" Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.393 [INFO][4302] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" host="localhost" Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.394 [INFO][4302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" host="localhost" Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.394 [INFO][4302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
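Note: the ipam.go lines above repeat the same per-pod pattern: take the host-wide IPAM lock, confirm this node's affinity to block 192.168.88.128/26, then claim the next free address from it (.129 for whisker, .130 for goldmane, .131 for coredns so far). The sketch below is a toy model of that "assign from the affine block" step using only the standard library; Calico's real allocator tracks handles, attributes, and per-block allocation state, none of which is shown here.

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the block from its first usable address and returns the
// first one not already handed out. Purely illustrative.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() { // skip the network address
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.129"): true, // whisker
		netip.MustParseAddr("192.168.88.130"): true, // goldmane
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println(a) // 192.168.88.131, matching the coredns assignment above
	}
}
```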
Sep 13 00:05:34.427480 containerd[1445]: 2025-09-13 00:05:34.394 [INFO][4302] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" HandleID="k8s-pod-network.e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" Workload="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:34.428983 containerd[1445]: 2025-09-13 00:05:34.403 [INFO][4269] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lmfsr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"37010d28-d42d-4197-85d1-065db74b5133", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-lmfsr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a5598628df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:34.428983 containerd[1445]: 2025-09-13 00:05:34.403 [INFO][4269] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lmfsr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:34.428983 containerd[1445]: 2025-09-13 00:05:34.403 [INFO][4269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a5598628df ContainerID="e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lmfsr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:34.428983 containerd[1445]: 2025-09-13 00:05:34.413 [INFO][4269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lmfsr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:34.428983 
containerd[1445]: 2025-09-13 00:05:34.413 [INFO][4269] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lmfsr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"37010d28-d42d-4197-85d1-065db74b5133", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423", Pod:"coredns-7c65d6cfc9-lmfsr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a5598628df", MAC:"b2:ef:83:27:61:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:34.428983 containerd[1445]: 2025-09-13 00:05:34.423 [INFO][4269] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lmfsr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:34.444692 containerd[1445]: time="2025-09-13T00:05:34.444603218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:34.444830 containerd[1445]: time="2025-09-13T00:05:34.444701259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:34.444830 containerd[1445]: time="2025-09-13T00:05:34.444733540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:34.444883 containerd[1445]: time="2025-09-13T00:05:34.444840381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:34.467407 systemd[1]: Started cri-containerd-e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423.scope - libcontainer container e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423. Sep 13 00:05:34.482485 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:05:34.507836 systemd-networkd[1386]: cali2f426cfbf15: Link UP Sep 13 00:05:34.508106 systemd-networkd[1386]: cali2f426cfbf15: Gained carrier Sep 13 00:05:34.519836 containerd[1445]: time="2025-09-13T00:05:34.519606224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lmfsr,Uid:37010d28-d42d-4197-85d1-065db74b5133,Namespace:kube-system,Attempt:1,} returns sandbox id \"e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423\"" Sep 13 00:05:34.521771 kubelet[2465]: E0913 00:05:34.521741 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:34.526134 containerd[1445]: time="2025-09-13T00:05:34.526092217Z" level=info msg="CreateContainer within sandbox \"e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.230 [INFO][4254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0 calico-kube-controllers-5f5b759db5- calico-system 071f06bb-6417-46e6-bfae-280329d73932 974 0 2025-09-13 00:05:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f5b759db5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5f5b759db5-cjrjx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2f426cfbf15 [] [] }} ContainerID="bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" Namespace="calico-system" Pod="calico-kube-controllers-5f5b759db5-cjrjx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-" Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.230 [INFO][4254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" Namespace="calico-system" Pod="calico-kube-controllers-5f5b759db5-cjrjx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.263 [INFO][4308] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" HandleID="k8s-pod-network.bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" Workload="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.263 [INFO][4308] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" HandleID="k8s-pod-network.bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" 
Workload="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005baaa0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f5b759db5-cjrjx", "timestamp":"2025-09-13 00:05:34.263100771 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.263 [INFO][4308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.394 [INFO][4308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.394 [INFO][4308] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.463 [INFO][4308] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" host="localhost" Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.474 [INFO][4308] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.479 [INFO][4308] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.481 [INFO][4308] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.488 [INFO][4308] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.488 [INFO][4308] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" host="localhost" Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.490 [INFO][4308] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4 Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.494 [INFO][4308] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" host="localhost" Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.500 [INFO][4308] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" host="localhost" Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.500 [INFO][4308] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" host="localhost" Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.500 [INFO][4308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:05:34.526988 containerd[1445]: 2025-09-13 00:05:34.500 [INFO][4308] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" HandleID="k8s-pod-network.bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" Workload="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:34.527474 containerd[1445]: 2025-09-13 00:05:34.503 [INFO][4254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" Namespace="calico-system" Pod="calico-kube-controllers-5f5b759db5-cjrjx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0", GenerateName:"calico-kube-controllers-5f5b759db5-", Namespace:"calico-system", SelfLink:"", UID:"071f06bb-6417-46e6-bfae-280329d73932", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5b759db5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f5b759db5-cjrjx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f426cfbf15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:34.527474 containerd[1445]: 2025-09-13 00:05:34.503 [INFO][4254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" Namespace="calico-system" Pod="calico-kube-controllers-5f5b759db5-cjrjx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:34.527474 containerd[1445]: 2025-09-13 00:05:34.503 [INFO][4254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f426cfbf15 ContainerID="bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" Namespace="calico-system" Pod="calico-kube-controllers-5f5b759db5-cjrjx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:34.527474 containerd[1445]: 2025-09-13 00:05:34.507 [INFO][4254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" Namespace="calico-system" Pod="calico-kube-controllers-5f5b759db5-cjrjx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:34.527474 containerd[1445]: 2025-09-13 00:05:34.507 [INFO][4254] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" Namespace="calico-system" Pod="calico-kube-controllers-5f5b759db5-cjrjx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0", GenerateName:"calico-kube-controllers-5f5b759db5-", Namespace:"calico-system", SelfLink:"", UID:"071f06bb-6417-46e6-bfae-280329d73932", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5b759db5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4", Pod:"calico-kube-controllers-5f5b759db5-cjrjx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f426cfbf15", MAC:"5e:39:45:a0:7e:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:34.527474 containerd[1445]: 2025-09-13 00:05:34.520 [INFO][4254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4" Namespace="calico-system" Pod="calico-kube-controllers-5f5b759db5-cjrjx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:34.546504 containerd[1445]: time="2025-09-13T00:05:34.546314726Z" level=info msg="CreateContainer within sandbox \"e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"387eef4a5ae9150abb7c1839d5d04a1e312b20326b5ca016c22eefabedeeb208\"" Sep 13 00:05:34.547391 containerd[1445]: time="2025-09-13T00:05:34.547362777Z" level=info msg="StartContainer for \"387eef4a5ae9150abb7c1839d5d04a1e312b20326b5ca016c22eefabedeeb208\"" Sep 13 00:05:34.568813 containerd[1445]: time="2025-09-13T00:05:34.568713338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:34.568813 containerd[1445]: time="2025-09-13T00:05:34.568790219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:34.568813 containerd[1445]: time="2025-09-13T00:05:34.568807739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:34.569105 containerd[1445]: time="2025-09-13T00:05:34.568901420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:34.572138 systemd[1]: Started cri-containerd-387eef4a5ae9150abb7c1839d5d04a1e312b20326b5ca016c22eefabedeeb208.scope - libcontainer container 387eef4a5ae9150abb7c1839d5d04a1e312b20326b5ca016c22eefabedeeb208. Sep 13 00:05:34.589269 systemd[1]: Started cri-containerd-bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4.scope - libcontainer container bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4. Sep 13 00:05:34.602997 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:05:34.610868 containerd[1445]: time="2025-09-13T00:05:34.610823933Z" level=info msg="StartContainer for \"387eef4a5ae9150abb7c1839d5d04a1e312b20326b5ca016c22eefabedeeb208\" returns successfully" Sep 13 00:05:34.622665 containerd[1445]: time="2025-09-13T00:05:34.622621826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5b759db5-cjrjx,Uid:071f06bb-6417-46e6-bfae-280329d73932,Namespace:calico-system,Attempt:1,} returns sandbox id \"bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4\"" Sep 13 00:05:34.992061 containerd[1445]: time="2025-09-13T00:05:34.990010810Z" level=info msg="StopPodSandbox for \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\"" Sep 13 00:05:35.073929 containerd[1445]: 2025-09-13 00:05:35.038 [INFO][4526] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:35.073929 containerd[1445]: 2025-09-13 00:05:35.038 [INFO][4526] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" iface="eth0" netns="/var/run/netns/cni-c73721c0-6a73-c4aa-e5f6-4cdbfce71d93" Sep 13 00:05:35.073929 containerd[1445]: 2025-09-13 00:05:35.038 [INFO][4526] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" iface="eth0" netns="/var/run/netns/cni-c73721c0-6a73-c4aa-e5f6-4cdbfce71d93" Sep 13 00:05:35.073929 containerd[1445]: 2025-09-13 00:05:35.039 [INFO][4526] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" iface="eth0" netns="/var/run/netns/cni-c73721c0-6a73-c4aa-e5f6-4cdbfce71d93" Sep 13 00:05:35.073929 containerd[1445]: 2025-09-13 00:05:35.039 [INFO][4526] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:35.073929 containerd[1445]: 2025-09-13 00:05:35.039 [INFO][4526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:35.073929 containerd[1445]: 2025-09-13 00:05:35.059 [INFO][4540] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" HandleID="k8s-pod-network.583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Workload="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:35.073929 containerd[1445]: 2025-09-13 00:05:35.060 [INFO][4540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:35.073929 containerd[1445]: 2025-09-13 00:05:35.060 [INFO][4540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:05:35.073929 containerd[1445]: 2025-09-13 00:05:35.068 [WARNING][4540] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" HandleID="k8s-pod-network.583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Workload="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:35.073929 containerd[1445]: 2025-09-13 00:05:35.068 [INFO][4540] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" HandleID="k8s-pod-network.583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Workload="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:35.073929 containerd[1445]: 2025-09-13 00:05:35.070 [INFO][4540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:35.073929 containerd[1445]: 2025-09-13 00:05:35.072 [INFO][4526] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:35.075272 containerd[1445]: time="2025-09-13T00:05:35.074071575Z" level=info msg="TearDown network for sandbox \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\" successfully" Sep 13 00:05:35.075272 containerd[1445]: time="2025-09-13T00:05:35.074095975Z" level=info msg="StopPodSandbox for \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\" returns successfully" Sep 13 00:05:35.076699 containerd[1445]: time="2025-09-13T00:05:35.076669244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5t74,Uid:7d15f87e-d4b3-4e20-9451-06b0fba27ad4,Namespace:calico-system,Attempt:1,}" Sep 13 00:05:35.077513 systemd[1]: run-netns-cni\x2dc73721c0\x2d6a73\x2dc4aa\x2de5f6\x2d4cdbfce71d93.mount: Deactivated successfully. 
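Note: the teardown entries above show the CNI DEL path hitting a handle that was already released ("Asked to release address but it doesn't exist. Ignoring") and continuing anyway. As a purely illustrative sketch of that idempotent-release pattern — not Calico's actual API; the map layout and handle name are assumptions — it behaves roughly like this:

```go
package main

import "log"

// releaseByHandle is an illustrative sketch (not Calico's code) of the
// idempotent release path seen above: a missing handle is logged as a
// warning and ignored, so repeated CNI DELs for the same sandbox are safe.
func releaseByHandle(alloc map[string][]string, handleID string) {
	if ips, ok := alloc[handleID]; ok && len(ips) > 0 {
		delete(alloc, handleID)
		log.Printf("released %v for handle %s", ips, handleID)
		return
	}
	log.Printf("[WARNING] asked to release handle %s but it doesn't exist, ignoring", handleID)
}

func main() {
	alloc := map[string][]string{} // hypothetical handle -> IPs table
	// A repeated DEL for an already-torn-down sandbox takes the warning branch.
	releaseByHandle(alloc, "k8s-pod-network.583c4c8c...")
}
```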
Sep 13 00:05:35.174067 kubelet[2465]: E0913 00:05:35.174018 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:35.189974 kubelet[2465]: I0913 00:05:35.189913 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lmfsr" podStartSLOduration=34.189894566 podStartE2EDuration="34.189894566s" podCreationTimestamp="2025-09-13 00:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:35.189787404 +0000 UTC m=+40.287433927" watchObservedRunningTime="2025-09-13 00:05:35.189894566 +0000 UTC m=+40.287541089" Sep 13 00:05:35.204921 systemd-networkd[1386]: cali9a9887bd885: Link UP Sep 13 00:05:35.205101 systemd-networkd[1386]: cali9a9887bd885: Gained carrier Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.121 [INFO][4548] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--c5t74-eth0 csi-node-driver- calico-system 7d15f87e-d4b3-4e20-9451-06b0fba27ad4 1000 0 2025-09-13 00:05:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-c5t74 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9a9887bd885 [] [] }} ContainerID="97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" Namespace="calico-system" Pod="csi-node-driver-c5t74" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5t74-" Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.121 [INFO][4548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" Namespace="calico-system" Pod="csi-node-driver-c5t74" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.153 [INFO][4563] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" HandleID="k8s-pod-network.97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" Workload="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.153 [INFO][4563] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" HandleID="k8s-pod-network.97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" Workload="localhost-k8s-csi--node--driver--c5t74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c4f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-c5t74", "timestamp":"2025-09-13 00:05:35.153345365 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.153 [INFO][4563] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.153 [INFO][4563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.153 [INFO][4563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.166 [INFO][4563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" host="localhost" Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.170 [INFO][4563] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.176 [INFO][4563] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.178 [INFO][4563] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.181 [INFO][4563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.182 [INFO][4563] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" host="localhost" Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.183 [INFO][4563] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61 Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.189 [INFO][4563] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" host="localhost" Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.197 [INFO][4563] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" host="localhost" Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.197 [INFO][4563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" host="localhost" Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.197 [INFO][4563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:05:35.229416 containerd[1445]: 2025-09-13 00:05:35.197 [INFO][4563] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" HandleID="k8s-pod-network.97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" Workload="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:35.230642 containerd[1445]: 2025-09-13 00:05:35.201 [INFO][4548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" Namespace="calico-system" Pod="csi-node-driver-c5t74" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5t74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c5t74-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d15f87e-d4b3-4e20-9451-06b0fba27ad4", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-c5t74", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a9887bd885", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:35.230642 containerd[1445]: 2025-09-13 00:05:35.201 [INFO][4548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" Namespace="calico-system" Pod="csi-node-driver-c5t74" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:35.230642 containerd[1445]: 2025-09-13 00:05:35.201 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a9887bd885 ContainerID="97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" Namespace="calico-system" Pod="csi-node-driver-c5t74" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:35.230642 containerd[1445]: 2025-09-13 00:05:35.204 [INFO][4548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" Namespace="calico-system" Pod="csi-node-driver-c5t74" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:35.230642 containerd[1445]: 2025-09-13 00:05:35.206 [INFO][4548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" Namespace="calico-system" Pod="csi-node-driver-c5t74" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--c5t74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c5t74-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d15f87e-d4b3-4e20-9451-06b0fba27ad4", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61", Pod:"csi-node-driver-c5t74", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a9887bd885", MAC:"22:de:dd:5c:7c:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:35.230642 containerd[1445]: 2025-09-13 00:05:35.224 [INFO][4548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61" Namespace="calico-system" Pod="csi-node-driver-c5t74" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:35.247913 containerd[1445]: time="2025-09-13T00:05:35.247703000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:35.247913 containerd[1445]: time="2025-09-13T00:05:35.247766120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:35.247913 containerd[1445]: time="2025-09-13T00:05:35.247781480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:35.247913 containerd[1445]: time="2025-09-13T00:05:35.247873441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:35.271262 systemd[1]: Started cri-containerd-97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61.scope - libcontainer container 97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61. 
Sep 13 00:05:35.281567 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:05:35.292423 containerd[1445]: time="2025-09-13T00:05:35.292367050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5t74,Uid:7d15f87e-d4b3-4e20-9451-06b0fba27ad4,Namespace:calico-system,Attempt:1,} returns sandbox id \"97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61\"" Sep 13 00:05:35.634210 systemd-networkd[1386]: cali3a5598628df: Gained IPv6LL Sep 13 00:05:35.826249 systemd-networkd[1386]: calib1dfcee83e7: Gained IPv6LL Sep 13 00:05:35.948088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2529074406.mount: Deactivated successfully. Sep 13 00:05:35.989878 containerd[1445]: time="2025-09-13T00:05:35.989830380Z" level=info msg="StopPodSandbox for \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\"" Sep 13 00:05:35.989878 containerd[1445]: time="2025-09-13T00:05:35.989877340Z" level=info msg="StopPodSandbox for \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\"" Sep 13 00:05:36.097379 containerd[1445]: 2025-09-13 00:05:36.045 [INFO][4657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:36.097379 containerd[1445]: 2025-09-13 00:05:36.046 [INFO][4657] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" iface="eth0" netns="/var/run/netns/cni-8fb292a8-1cbe-a4be-4f75-460c972511b1" Sep 13 00:05:36.097379 containerd[1445]: 2025-09-13 00:05:36.046 [INFO][4657] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" iface="eth0" netns="/var/run/netns/cni-8fb292a8-1cbe-a4be-4f75-460c972511b1" Sep 13 00:05:36.097379 containerd[1445]: 2025-09-13 00:05:36.047 [INFO][4657] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" iface="eth0" netns="/var/run/netns/cni-8fb292a8-1cbe-a4be-4f75-460c972511b1" Sep 13 00:05:36.097379 containerd[1445]: 2025-09-13 00:05:36.047 [INFO][4657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:36.097379 containerd[1445]: 2025-09-13 00:05:36.047 [INFO][4657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:36.097379 containerd[1445]: 2025-09-13 00:05:36.080 [INFO][4673] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" HandleID="k8s-pod-network.9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Workload="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:36.097379 containerd[1445]: 2025-09-13 00:05:36.081 [INFO][4673] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:36.097379 containerd[1445]: 2025-09-13 00:05:36.081 [INFO][4673] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:36.097379 containerd[1445]: 2025-09-13 00:05:36.090 [WARNING][4673] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" HandleID="k8s-pod-network.9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Workload="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:36.097379 containerd[1445]: 2025-09-13 00:05:36.090 [INFO][4673] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" HandleID="k8s-pod-network.9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Workload="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:36.097379 containerd[1445]: 2025-09-13 00:05:36.092 [INFO][4673] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:36.097379 containerd[1445]: 2025-09-13 00:05:36.094 [INFO][4657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:36.098494 containerd[1445]: time="2025-09-13T00:05:36.098457183Z" level=info msg="TearDown network for sandbox \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\" successfully" Sep 13 00:05:36.098494 containerd[1445]: time="2025-09-13T00:05:36.098496743Z" level=info msg="StopPodSandbox for \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\" returns successfully" Sep 13 00:05:36.100570 kubelet[2465]: E0913 00:05:36.100520 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:36.101149 containerd[1445]: time="2025-09-13T00:05:36.101099611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-42f6q,Uid:af8d6313-8016-4b63-b286-8fb59033218e,Namespace:kube-system,Attempt:1,}" Sep 13 00:05:36.101939 systemd[1]: run-netns-cni\x2d8fb292a8\x2d1cbe\x2da4be\x2d4f75\x2d460c972511b1.mount: Deactivated successfully. Sep 13 00:05:36.120298 containerd[1445]: 2025-09-13 00:05:36.051 [INFO][4658] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:36.120298 containerd[1445]: 2025-09-13 00:05:36.051 [INFO][4658] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" iface="eth0" netns="/var/run/netns/cni-a94531b7-f3a1-e569-7282-93e24061fcbb" Sep 13 00:05:36.120298 containerd[1445]: 2025-09-13 00:05:36.052 [INFO][4658] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" iface="eth0" netns="/var/run/netns/cni-a94531b7-f3a1-e569-7282-93e24061fcbb" Sep 13 00:05:36.120298 containerd[1445]: 2025-09-13 00:05:36.052 [INFO][4658] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" iface="eth0" netns="/var/run/netns/cni-a94531b7-f3a1-e569-7282-93e24061fcbb" Sep 13 00:05:36.120298 containerd[1445]: 2025-09-13 00:05:36.052 [INFO][4658] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:36.120298 containerd[1445]: 2025-09-13 00:05:36.052 [INFO][4658] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:36.120298 containerd[1445]: 2025-09-13 00:05:36.080 [INFO][4679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" HandleID="k8s-pod-network.0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Workload="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:36.120298 containerd[1445]: 2025-09-13 00:05:36.081 [INFO][4679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:36.120298 containerd[1445]: 2025-09-13 00:05:36.092 [INFO][4679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:36.120298 containerd[1445]: 2025-09-13 00:05:36.106 [WARNING][4679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" HandleID="k8s-pod-network.0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Workload="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:36.120298 containerd[1445]: 2025-09-13 00:05:36.106 [INFO][4679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" HandleID="k8s-pod-network.0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Workload="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:36.120298 containerd[1445]: 2025-09-13 00:05:36.110 [INFO][4679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:36.120298 containerd[1445]: 2025-09-13 00:05:36.114 [INFO][4658] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:36.121743 containerd[1445]: time="2025-09-13T00:05:36.121253586Z" level=info msg="TearDown network for sandbox \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\" successfully" Sep 13 00:05:36.121743 containerd[1445]: time="2025-09-13T00:05:36.121440628Z" level=info msg="StopPodSandbox for \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\" returns successfully" Sep 13 00:05:36.122683 containerd[1445]: time="2025-09-13T00:05:36.122395079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c55569db5-8jkvx,Uid:7691cf75-c4db-4e69-bd59-9bd4189e2702,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:05:36.148780 systemd-networkd[1386]: cali2f426cfbf15: Gained IPv6LL Sep 13 00:05:36.188250 kubelet[2465]: E0913 00:05:36.188099 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:36.251679 systemd-networkd[1386]: cali287d90e9f45: Link UP Sep 13 00:05:36.252612 systemd-networkd[1386]: cali287d90e9f45: Gained carrier Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.162 [INFO][4688] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0 coredns-7c65d6cfc9- kube-system af8d6313-8016-4b63-b286-8fb59033218e 1018 0 2025-09-13 00:05:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-42f6q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali287d90e9f45 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-42f6q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--42f6q-" Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.162 [INFO][4688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-42f6q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.205 [INFO][4718] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" HandleID="k8s-pod-network.2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" Workload="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.205 [INFO][4718] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" HandleID="k8s-pod-network.2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" Workload="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137900), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-42f6q", "timestamp":"2025-09-13 00:05:36.205742929 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.205 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.206 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.206 [INFO][4718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.216 [INFO][4718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" host="localhost" Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.221 [INFO][4718] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.225 [INFO][4718] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.227 [INFO][4718] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.231 [INFO][4718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.231 [INFO][4718] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" host="localhost" Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.233 [INFO][4718] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8 Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.239 [INFO][4718] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" host="localhost" Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.245 [INFO][4718] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" host="localhost" Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.246 [INFO][4718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" host="localhost" Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.246 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:05:36.272280 containerd[1445]: 2025-09-13 00:05:36.246 [INFO][4718] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" HandleID="k8s-pod-network.2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" Workload="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:36.273457 containerd[1445]: 2025-09-13 00:05:36.248 [INFO][4688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-42f6q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"af8d6313-8016-4b63-b286-8fb59033218e", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-42f6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali287d90e9f45", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:36.273457 containerd[1445]: 2025-09-13 00:05:36.248 [INFO][4688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-42f6q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:36.273457 containerd[1445]: 2025-09-13 00:05:36.248 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali287d90e9f45 ContainerID="2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-42f6q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:36.273457 containerd[1445]: 2025-09-13 00:05:36.253 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-42f6q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:36.273457 
containerd[1445]: 2025-09-13 00:05:36.254 [INFO][4688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-42f6q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"af8d6313-8016-4b63-b286-8fb59033218e", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8", Pod:"coredns-7c65d6cfc9-42f6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali287d90e9f45", MAC:"ce:66:29:47:c9:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:36.273457 containerd[1445]: 2025-09-13 00:05:36.267 [INFO][4688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-42f6q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:36.293634 containerd[1445]: time="2025-09-13T00:05:36.293260663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:36.293634 containerd[1445]: time="2025-09-13T00:05:36.293332864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:36.294237 containerd[1445]: time="2025-09-13T00:05:36.294122792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:36.295253 containerd[1445]: time="2025-09-13T00:05:36.295207884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:36.320516 systemd[1]: Started cri-containerd-2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8.scope - libcontainer container 2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8. Sep 13 00:05:36.338670 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:05:36.364932 systemd-networkd[1386]: calia6dcff6e4d5: Link UP Sep 13 00:05:36.365649 systemd-networkd[1386]: calia6dcff6e4d5: Gained carrier Sep 13 00:05:36.378113 containerd[1445]: time="2025-09-13T00:05:36.378070009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-42f6q,Uid:af8d6313-8016-4b63-b286-8fb59033218e,Namespace:kube-system,Attempt:1,} returns sandbox id \"2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8\"" Sep 13 00:05:36.379660 kubelet[2465]: E0913 00:05:36.379346 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:36.383608 containerd[1445]: time="2025-09-13T00:05:36.383457826Z" level=info msg="CreateContainer within sandbox \"2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.178 [INFO][4701] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0 calico-apiserver-5c55569db5- calico-apiserver 7691cf75-c4db-4e69-bd59-9bd4189e2702 1019 0 2025-09-13 00:05:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c55569db5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c55569db5-8jkvx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia6dcff6e4d5 [] [] }} ContainerID="4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8jkvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-" Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.178 [INFO][4701] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8jkvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.215 [INFO][4724] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" HandleID="k8s-pod-network.4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" Workload="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.215 [INFO][4724] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" HandleID="k8s-pod-network.4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" Workload="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002c32a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c55569db5-8jkvx", "timestamp":"2025-09-13 00:05:36.215152589 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.215 [INFO][4724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.246 [INFO][4724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.246 [INFO][4724] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.317 [INFO][4724] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" host="localhost" Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.324 [INFO][4724] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.333 [INFO][4724] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.336 [INFO][4724] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.340 [INFO][4724] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.340 [INFO][4724] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" host="localhost" Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.343 [INFO][4724] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.349 [INFO][4724] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" host="localhost" Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.358 [INFO][4724] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" host="localhost" Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.358 [INFO][4724] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" host="localhost" Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.358 [INFO][4724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:05:36.390959 containerd[1445]: 2025-09-13 00:05:36.358 [INFO][4724] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" HandleID="k8s-pod-network.4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" Workload="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:36.391514 containerd[1445]: 2025-09-13 00:05:36.362 [INFO][4701] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8jkvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0", GenerateName:"calico-apiserver-5c55569db5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7691cf75-c4db-4e69-bd59-9bd4189e2702", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c55569db5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c55569db5-8jkvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6dcff6e4d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:36.391514 containerd[1445]: 2025-09-13 00:05:36.362 [INFO][4701] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8jkvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:36.391514 containerd[1445]: 2025-09-13 00:05:36.362 [INFO][4701] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6dcff6e4d5 ContainerID="4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8jkvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:36.391514 containerd[1445]: 2025-09-13 00:05:36.366 [INFO][4701] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8jkvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:36.391514 containerd[1445]: 2025-09-13 00:05:36.366 [INFO][4701] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8jkvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0", GenerateName:"calico-apiserver-5c55569db5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7691cf75-c4db-4e69-bd59-9bd4189e2702", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c55569db5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d", Pod:"calico-apiserver-5c55569db5-8jkvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6dcff6e4d5", MAC:"f6:c7:7d:88:ee:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:36.391514 containerd[1445]: 2025-09-13 00:05:36.382 [INFO][4701] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8jkvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:36.409881 containerd[1445]: time="2025-09-13T00:05:36.408260811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:36.409881 containerd[1445]: time="2025-09-13T00:05:36.409486304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:36.409881 containerd[1445]: time="2025-09-13T00:05:36.409503904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:36.410214 containerd[1445]: time="2025-09-13T00:05:36.409600185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:36.427227 systemd[1]: Started cri-containerd-4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d.scope - libcontainer container 4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d. 
Sep 13 00:05:36.427599 containerd[1445]: time="2025-09-13T00:05:36.427307495Z" level=info msg="CreateContainer within sandbox \"2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c57bc5f7272dedfccef118b0b4fdc421fc9558164a1c97451f169aeca8d09d0\"" Sep 13 00:05:36.428268 containerd[1445]: time="2025-09-13T00:05:36.428228504Z" level=info msg="StartContainer for \"0c57bc5f7272dedfccef118b0b4fdc421fc9558164a1c97451f169aeca8d09d0\"" Sep 13 00:05:36.445857 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:05:36.463288 systemd[1]: Started cri-containerd-0c57bc5f7272dedfccef118b0b4fdc421fc9558164a1c97451f169aeca8d09d0.scope - libcontainer container 0c57bc5f7272dedfccef118b0b4fdc421fc9558164a1c97451f169aeca8d09d0. Sep 13 00:05:36.485257 containerd[1445]: time="2025-09-13T00:05:36.485181313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c55569db5-8jkvx,Uid:7691cf75-c4db-4e69-bd59-9bd4189e2702,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d\"" Sep 13 00:05:36.604028 containerd[1445]: time="2025-09-13T00:05:36.603873580Z" level=info msg="StartContainer for \"0c57bc5f7272dedfccef118b0b4fdc421fc9558164a1c97451f169aeca8d09d0\" returns successfully" Sep 13 00:05:36.633295 containerd[1445]: time="2025-09-13T00:05:36.633222373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:36.635219 containerd[1445]: time="2025-09-13T00:05:36.635074073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 13 00:05:36.636273 containerd[1445]: time="2025-09-13T00:05:36.636222165Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:36.642515 containerd[1445]: time="2025-09-13T00:05:36.641917746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:36.643407 containerd[1445]: time="2025-09-13T00:05:36.643368082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.254799176s" Sep 13 00:05:36.643454 containerd[1445]: time="2025-09-13T00:05:36.643410722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 13 00:05:36.646461 containerd[1445]: time="2025-09-13T00:05:36.646421794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:05:36.647624 containerd[1445]: time="2025-09-13T00:05:36.647511446Z" level=info msg="CreateContainer within sandbox \"f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:05:36.659151 systemd-networkd[1386]: 
cali9a9887bd885: Gained IPv6LL Sep 13 00:05:36.662904 containerd[1445]: time="2025-09-13T00:05:36.662863210Z" level=info msg="CreateContainer within sandbox \"f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7edfd374ca08a06031d877d5cd4a8f77b0dc6ed05f1e479355d2bdea7e199492\"" Sep 13 00:05:36.664156 containerd[1445]: time="2025-09-13T00:05:36.663717139Z" level=info msg="StartContainer for \"7edfd374ca08a06031d877d5cd4a8f77b0dc6ed05f1e479355d2bdea7e199492\"" Sep 13 00:05:36.699101 systemd[1]: Started cri-containerd-7edfd374ca08a06031d877d5cd4a8f77b0dc6ed05f1e479355d2bdea7e199492.scope - libcontainer container 7edfd374ca08a06031d877d5cd4a8f77b0dc6ed05f1e479355d2bdea7e199492. Sep 13 00:05:36.733810 containerd[1445]: time="2025-09-13T00:05:36.733769247Z" level=info msg="StartContainer for \"7edfd374ca08a06031d877d5cd4a8f77b0dc6ed05f1e479355d2bdea7e199492\" returns successfully" Sep 13 00:05:36.745604 systemd[1]: run-netns-cni\x2da94531b7\x2df3a1\x2de569\x2d7282\x2d93e24061fcbb.mount: Deactivated successfully. Sep 13 00:05:37.191334 kubelet[2465]: E0913 00:05:37.191243 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:37.198308 kubelet[2465]: E0913 00:05:37.198220 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:37.206134 kubelet[2465]: I0913 00:05:37.205891 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-42f6q" podStartSLOduration=36.205874313 podStartE2EDuration="36.205874313s" podCreationTimestamp="2025-09-13 00:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:37.204119374 +0000 UTC m=+42.301765897" watchObservedRunningTime="2025-09-13 00:05:37.205874313 +0000 UTC m=+42.303520796" Sep 13 00:05:37.216599 kubelet[2465]: I0913 00:05:37.216430 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-snwp2" podStartSLOduration=19.960149911 podStartE2EDuration="22.216413062s" podCreationTimestamp="2025-09-13 00:05:15 +0000 UTC" firstStartedPulling="2025-09-13 00:05:34.388293743 +0000 UTC m=+39.485940266" lastFinishedPulling="2025-09-13 00:05:36.644556894 +0000 UTC m=+41.742203417" observedRunningTime="2025-09-13 00:05:37.216315941 +0000 UTC m=+42.313962464" watchObservedRunningTime="2025-09-13 00:05:37.216413062 +0000 UTC m=+42.314059545" Sep 13 00:05:37.426261 systemd-networkd[1386]: cali287d90e9f45: Gained IPv6LL Sep 13 00:05:37.989159 containerd[1445]: time="2025-09-13T00:05:37.989106703Z" level=info msg="StopPodSandbox for \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\"" Sep 13 00:05:38.068586 containerd[1445]: 2025-09-13 00:05:38.031 [INFO][4952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:38.068586 containerd[1445]: 2025-09-13 00:05:38.031 [INFO][4952] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" iface="eth0" netns="/var/run/netns/cni-0d3bddb8-d382-171f-0e41-52419f407e92" Sep 13 00:05:38.068586 containerd[1445]: 2025-09-13 00:05:38.031 [INFO][4952] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" iface="eth0" netns="/var/run/netns/cni-0d3bddb8-d382-171f-0e41-52419f407e92" Sep 13 00:05:38.068586 containerd[1445]: 2025-09-13 00:05:38.031 [INFO][4952] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" iface="eth0" netns="/var/run/netns/cni-0d3bddb8-d382-171f-0e41-52419f407e92" Sep 13 00:05:38.068586 containerd[1445]: 2025-09-13 00:05:38.031 [INFO][4952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:38.068586 containerd[1445]: 2025-09-13 00:05:38.031 [INFO][4952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:38.068586 containerd[1445]: 2025-09-13 00:05:38.054 [INFO][4961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" HandleID="k8s-pod-network.6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Workload="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:38.068586 containerd[1445]: 2025-09-13 00:05:38.055 [INFO][4961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:38.068586 containerd[1445]: 2025-09-13 00:05:38.055 [INFO][4961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:38.068586 containerd[1445]: 2025-09-13 00:05:38.063 [WARNING][4961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" HandleID="k8s-pod-network.6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Workload="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:38.068586 containerd[1445]: 2025-09-13 00:05:38.063 [INFO][4961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" HandleID="k8s-pod-network.6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Workload="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:38.068586 containerd[1445]: 2025-09-13 00:05:38.064 [INFO][4961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:38.068586 containerd[1445]: 2025-09-13 00:05:38.066 [INFO][4952] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:38.069023 containerd[1445]: time="2025-09-13T00:05:38.068737435Z" level=info msg="TearDown network for sandbox \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\" successfully" Sep 13 00:05:38.069023 containerd[1445]: time="2025-09-13T00:05:38.068764995Z" level=info msg="StopPodSandbox for \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\" returns successfully" Sep 13 00:05:38.069790 containerd[1445]: time="2025-09-13T00:05:38.069409481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c55569db5-8wwmc,Uid:f5742ff1-d3ec-4bea-89be-500213d11650,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:05:38.073521 systemd[1]: run-netns-cni\x2d0d3bddb8\x2dd382\x2d171f\x2d0e41\x2d52419f407e92.mount: Deactivated successfully. Sep 13 00:05:38.199632 systemd-networkd[1386]: cali66354e57ba9: Link UP Sep 13 00:05:38.199775 systemd-networkd[1386]: cali66354e57ba9: Gained carrier Sep 13 00:05:38.202378 kubelet[2465]: E0913 00:05:38.201806 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.118 [INFO][4969] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0 calico-apiserver-5c55569db5- calico-apiserver f5742ff1-d3ec-4bea-89be-500213d11650 1057 0 2025-09-13 00:05:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c55569db5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c55569db5-8wwmc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali66354e57ba9 [] [] }} ContainerID="e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8wwmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-" Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.118 [INFO][4969] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8wwmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.150 [INFO][4983] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" HandleID="k8s-pod-network.e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" Workload="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.150 [INFO][4983] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" HandleID="k8s-pod-network.e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" Workload="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-5c55569db5-8wwmc", "timestamp":"2025-09-13 00:05:38.150449144 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.150 [INFO][4983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.150 [INFO][4983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.150 [INFO][4983] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.162 [INFO][4983] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" host="localhost" Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.170 [INFO][4983] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.177 [INFO][4983] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.179 [INFO][4983] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.181 [INFO][4983] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.181 [INFO][4983] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" host="localhost" Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.183 [INFO][4983] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.186 [INFO][4983] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" host="localhost" Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.192 [INFO][4983] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" host="localhost" Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.192 [INFO][4983] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" host="localhost" Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.192 [INFO][4983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:05:38.219347 containerd[1445]: 2025-09-13 00:05:38.192 [INFO][4983] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" HandleID="k8s-pod-network.e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" Workload="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:38.219945 containerd[1445]: 2025-09-13 00:05:38.197 [INFO][4969] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8wwmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0", GenerateName:"calico-apiserver-5c55569db5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5742ff1-d3ec-4bea-89be-500213d11650", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c55569db5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c55569db5-8wwmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66354e57ba9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:38.219945 containerd[1445]: 2025-09-13 00:05:38.197 [INFO][4969] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8wwmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:38.219945 containerd[1445]: 2025-09-13 00:05:38.197 [INFO][4969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66354e57ba9 ContainerID="e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8wwmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:38.219945 containerd[1445]: 2025-09-13 00:05:38.199 [INFO][4969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8wwmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:38.219945 containerd[1445]: 2025-09-13 00:05:38.200 [INFO][4969] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8wwmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0", GenerateName:"calico-apiserver-5c55569db5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5742ff1-d3ec-4bea-89be-500213d11650", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c55569db5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f", Pod:"calico-apiserver-5c55569db5-8wwmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66354e57ba9", MAC:"3a:f6:d3:74:a6:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:38.219945 containerd[1445]: 2025-09-13 00:05:38.214 [INFO][4969] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f" Namespace="calico-apiserver" Pod="calico-apiserver-5c55569db5-8wwmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:38.233584 systemd[1]: run-containerd-runc-k8s.io-7edfd374ca08a06031d877d5cd4a8f77b0dc6ed05f1e479355d2bdea7e199492-runc.B03QUK.mount: Deactivated successfully. Sep 13 00:05:38.239073 containerd[1445]: time="2025-09-13T00:05:38.238372357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:38.239073 containerd[1445]: time="2025-09-13T00:05:38.238437357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:38.239073 containerd[1445]: time="2025-09-13T00:05:38.238464117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:38.239073 containerd[1445]: time="2025-09-13T00:05:38.238570759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:38.264280 systemd[1]: Started cri-containerd-e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f.scope - libcontainer container e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f. 
Sep 13 00:05:38.282345 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:05:38.306834 containerd[1445]: time="2025-09-13T00:05:38.306477808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c55569db5-8wwmc,Uid:f5742ff1-d3ec-4bea-89be-500213d11650,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f\"" Sep 13 00:05:38.386261 systemd-networkd[1386]: calia6dcff6e4d5: Gained IPv6LL Sep 13 00:05:39.078925 containerd[1445]: time="2025-09-13T00:05:39.078869910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:39.079444 containerd[1445]: time="2025-09-13T00:05:39.079410995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 13 00:05:39.080334 containerd[1445]: time="2025-09-13T00:05:39.080301564Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:39.082621 containerd[1445]: time="2025-09-13T00:05:39.082591427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:39.083409 containerd[1445]: time="2025-09-13T00:05:39.083296634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.436829639s" Sep 13 00:05:39.083409 containerd[1445]: time="2025-09-13T00:05:39.083331674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 13 00:05:39.084206 containerd[1445]: time="2025-09-13T00:05:39.084182042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:05:39.091074 containerd[1445]: time="2025-09-13T00:05:39.090472345Z" level=info msg="CreateContainer within sandbox \"bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:05:39.100932 containerd[1445]: time="2025-09-13T00:05:39.100892088Z" level=info msg="CreateContainer within sandbox \"bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"517e2694701bf7cb7c1f1050893dca9f7a4bddab7635d99d14cda4b8d6a55fee\"" Sep 13 00:05:39.103367 containerd[1445]: time="2025-09-13T00:05:39.103333672Z" level=info msg="StartContainer for \"517e2694701bf7cb7c1f1050893dca9f7a4bddab7635d99d14cda4b8d6a55fee\"" Sep 13 00:05:39.133263 systemd[1]: Started cri-containerd-517e2694701bf7cb7c1f1050893dca9f7a4bddab7635d99d14cda4b8d6a55fee.scope - libcontainer container 517e2694701bf7cb7c1f1050893dca9f7a4bddab7635d99d14cda4b8d6a55fee. 
Sep 13 00:05:39.164381 containerd[1445]: time="2025-09-13T00:05:39.164310317Z" level=info msg="StartContainer for \"517e2694701bf7cb7c1f1050893dca9f7a4bddab7635d99d14cda4b8d6a55fee\" returns successfully" Sep 13 00:05:39.208439 kubelet[2465]: E0913 00:05:39.208395 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:05:39.229506 kubelet[2465]: I0913 00:05:39.229445 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f5b759db5-cjrjx" podStartSLOduration=19.769919049 podStartE2EDuration="24.229425082s" podCreationTimestamp="2025-09-13 00:05:15 +0000 UTC" firstStartedPulling="2025-09-13 00:05:34.624521888 +0000 UTC m=+39.722168411" lastFinishedPulling="2025-09-13 00:05:39.084027921 +0000 UTC m=+44.181674444" observedRunningTime="2025-09-13 00:05:39.228609714 +0000 UTC m=+44.326256237" watchObservedRunningTime="2025-09-13 00:05:39.229425082 +0000 UTC m=+44.327071565" Sep 13 00:05:39.410165 systemd-networkd[1386]: cali66354e57ba9: Gained IPv6LL Sep 13 00:05:40.377435 containerd[1445]: time="2025-09-13T00:05:40.377339176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:40.380457 containerd[1445]: time="2025-09-13T00:05:40.378622348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 13 00:05:40.380457 containerd[1445]: time="2025-09-13T00:05:40.379591917Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:40.382292 containerd[1445]: time="2025-09-13T00:05:40.382241903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:40.383400 containerd[1445]: time="2025-09-13T00:05:40.382897709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.298681266s" Sep 13 00:05:40.383400 containerd[1445]: time="2025-09-13T00:05:40.382945390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 13 00:05:40.384094 containerd[1445]: time="2025-09-13T00:05:40.383894359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:05:40.385161 containerd[1445]: time="2025-09-13T00:05:40.385062970Z" level=info msg="CreateContainer within sandbox \"97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:05:40.408380 containerd[1445]: time="2025-09-13T00:05:40.408331876Z" level=info msg="CreateContainer within sandbox \"97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bd1c2e41cc80467f2f8607e0e6769345084467af95b50f180ecdc8895c3b2daf\"" Sep 13 00:05:40.408983 
containerd[1445]: time="2025-09-13T00:05:40.408812081Z" level=info msg="StartContainer for \"bd1c2e41cc80467f2f8607e0e6769345084467af95b50f180ecdc8895c3b2daf\"" Sep 13 00:05:40.445216 systemd[1]: Started cri-containerd-bd1c2e41cc80467f2f8607e0e6769345084467af95b50f180ecdc8895c3b2daf.scope - libcontainer container bd1c2e41cc80467f2f8607e0e6769345084467af95b50f180ecdc8895c3b2daf. Sep 13 00:05:40.479630 containerd[1445]: time="2025-09-13T00:05:40.479588326Z" level=info msg="StartContainer for \"bd1c2e41cc80467f2f8607e0e6769345084467af95b50f180ecdc8895c3b2daf\" returns successfully" Sep 13 00:05:42.587245 containerd[1445]: time="2025-09-13T00:05:42.586469688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:42.587245 containerd[1445]: time="2025-09-13T00:05:42.587007653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 13 00:05:42.587838 containerd[1445]: time="2025-09-13T00:05:42.587813141Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:42.590089 containerd[1445]: time="2025-09-13T00:05:42.590036961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:42.591127 containerd[1445]: time="2025-09-13T00:05:42.591098371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.207171652s" Sep 13 00:05:42.591200 containerd[1445]: time="2025-09-13T00:05:42.591129771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:05:42.592504 containerd[1445]: time="2025-09-13T00:05:42.592346303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:05:42.593578 containerd[1445]: time="2025-09-13T00:05:42.593545634Z" level=info msg="CreateContainer within sandbox \"4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:05:42.607234 containerd[1445]: time="2025-09-13T00:05:42.607133480Z" level=info msg="CreateContainer within sandbox \"4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0427152d2f1702ad0eecfb7ee2bd70b581938bdc7d45962dd904618348860cf8\"" Sep 13 00:05:42.607778 containerd[1445]: time="2025-09-13T00:05:42.607723445Z" level=info msg="StartContainer for \"0427152d2f1702ad0eecfb7ee2bd70b581938bdc7d45962dd904618348860cf8\"" Sep 13 00:05:42.652131 systemd[1]: Started cri-containerd-0427152d2f1702ad0eecfb7ee2bd70b581938bdc7d45962dd904618348860cf8.scope - libcontainer container 0427152d2f1702ad0eecfb7ee2bd70b581938bdc7d45962dd904618348860cf8. 
Sep 13 00:05:42.662202 systemd[1]: Started sshd@7-10.0.0.78:22-10.0.0.1:38016.service - OpenSSH per-connection server daemon (10.0.0.1:38016). Sep 13 00:05:42.760504 containerd[1445]: time="2025-09-13T00:05:42.760231821Z" level=info msg="StartContainer for \"0427152d2f1702ad0eecfb7ee2bd70b581938bdc7d45962dd904618348860cf8\" returns successfully" Sep 13 00:05:42.791060 sshd[5218]: Accepted publickey for core from 10.0.0.1 port 38016 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:05:42.793748 sshd[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:42.798192 systemd-logind[1428]: New session 8 of user core. Sep 13 00:05:42.811267 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:05:42.818254 containerd[1445]: time="2025-09-13T00:05:42.818217759Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:42.818805 containerd[1445]: time="2025-09-13T00:05:42.818775644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:05:42.822687 containerd[1445]: time="2025-09-13T00:05:42.822500198Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 230.121375ms" Sep 13 00:05:42.822687 containerd[1445]: time="2025-09-13T00:05:42.822553599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:05:42.823457 containerd[1445]: time="2025-09-13T00:05:42.823336526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:05:42.825591 containerd[1445]: time="2025-09-13T00:05:42.825506546Z" level=info msg="CreateContainer within sandbox \"e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:05:42.838255 containerd[1445]: time="2025-09-13T00:05:42.838111503Z" level=info msg="CreateContainer within sandbox \"e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5d51549fee749614ea470c1a92d9c5622cd0ae8ad2aa901451960750449b47f8\"" Sep 13 00:05:42.839318 containerd[1445]: time="2025-09-13T00:05:42.839293714Z" level=info msg="StartContainer for \"5d51549fee749614ea470c1a92d9c5622cd0ae8ad2aa901451960750449b47f8\"" Sep 13 00:05:42.875310 systemd[1]: Started cri-containerd-5d51549fee749614ea470c1a92d9c5622cd0ae8ad2aa901451960750449b47f8.scope - libcontainer container 5d51549fee749614ea470c1a92d9c5622cd0ae8ad2aa901451960750449b47f8. Sep 13 00:05:42.927641 containerd[1445]: time="2025-09-13T00:05:42.927599134Z" level=info msg="StartContainer for \"5d51549fee749614ea470c1a92d9c5622cd0ae8ad2aa901451960750449b47f8\" returns successfully" Sep 13 00:05:43.106707 sshd[5218]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:43.110385 systemd[1]: sshd@7-10.0.0.78:22-10.0.0.1:38016.service: Deactivated successfully. Sep 13 00:05:43.113814 systemd[1]: session-8.scope: Deactivated successfully. 
Sep 13 00:05:43.114636 systemd-logind[1428]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:05:43.115912 systemd-logind[1428]: Removed session 8. Sep 13 00:05:43.256429 kubelet[2465]: I0913 00:05:43.255878 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c55569db5-8jkvx" podStartSLOduration=27.15218318 podStartE2EDuration="33.255860533s" podCreationTimestamp="2025-09-13 00:05:10 +0000 UTC" firstStartedPulling="2025-09-13 00:05:36.488111984 +0000 UTC m=+41.585758507" lastFinishedPulling="2025-09-13 00:05:42.591789337 +0000 UTC m=+47.689435860" observedRunningTime="2025-09-13 00:05:43.2544492 +0000 UTC m=+48.352095723" watchObservedRunningTime="2025-09-13 00:05:43.255860533 +0000 UTC m=+48.353507056" Sep 13 00:05:44.027798 kubelet[2465]: I0913 00:05:44.027532 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:05:44.189300 kubelet[2465]: I0913 00:05:44.188018 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c55569db5-8wwmc" podStartSLOduration=29.673447364 podStartE2EDuration="34.188000939s" podCreationTimestamp="2025-09-13 00:05:10 +0000 UTC" firstStartedPulling="2025-09-13 00:05:38.30868083 +0000 UTC m=+43.406327353" lastFinishedPulling="2025-09-13 00:05:42.823234405 +0000 UTC m=+47.920880928" observedRunningTime="2025-09-13 00:05:43.279971552 +0000 UTC m=+48.377618075" watchObservedRunningTime="2025-09-13 00:05:44.188000939 +0000 UTC m=+49.285647462" Sep 13 00:05:44.246494 kubelet[2465]: I0913 00:05:44.246439 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:05:44.337780 containerd[1445]: time="2025-09-13T00:05:44.337026708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:44.340159 containerd[1445]: time="2025-09-13T00:05:44.339928174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 13 00:05:44.342400 containerd[1445]: time="2025-09-13T00:05:44.341154585Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:44.344470 containerd[1445]: time="2025-09-13T00:05:44.344165612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:44.345146 containerd[1445]: time="2025-09-13T00:05:44.345116461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.521752455s" Sep 13 00:05:44.345316 containerd[1445]: time="2025-09-13T00:05:44.345228822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 13 00:05:44.348215 containerd[1445]: time="2025-09-13T00:05:44.348178888Z" level=info 
msg="CreateContainer within sandbox \"97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:05:44.388945 containerd[1445]: time="2025-09-13T00:05:44.388826931Z" level=info msg="CreateContainer within sandbox \"97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ff28e995fa9c01d767df0f8e99ab414175292d540e045ab1e543a61642e2aad8\"" Sep 13 00:05:44.389801 containerd[1445]: time="2025-09-13T00:05:44.389729379Z" level=info msg="StartContainer for \"ff28e995fa9c01d767df0f8e99ab414175292d540e045ab1e543a61642e2aad8\"" Sep 13 00:05:44.435200 systemd[1]: Started cri-containerd-ff28e995fa9c01d767df0f8e99ab414175292d540e045ab1e543a61642e2aad8.scope - libcontainer container ff28e995fa9c01d767df0f8e99ab414175292d540e045ab1e543a61642e2aad8. Sep 13 00:05:44.462896 containerd[1445]: time="2025-09-13T00:05:44.462848351Z" level=info msg="StartContainer for \"ff28e995fa9c01d767df0f8e99ab414175292d540e045ab1e543a61642e2aad8\" returns successfully" Sep 13 00:05:45.080688 kubelet[2465]: I0913 00:05:45.080615 2465 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:05:45.081343 kubelet[2465]: I0913 00:05:45.081316 2465 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:05:45.273671 kubelet[2465]: I0913 00:05:45.273598 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-c5t74" podStartSLOduration=21.221088533 podStartE2EDuration="30.27358146s" podCreationTimestamp="2025-09-13 00:05:15 +0000 UTC" firstStartedPulling="2025-09-13 00:05:35.293542502 +0000 UTC m=+40.391189025" lastFinishedPulling="2025-09-13 00:05:44.346035469 +0000 UTC m=+49.443681952" observedRunningTime="2025-09-13 00:05:45.271963926 +0000 UTC m=+50.369610449" watchObservedRunningTime="2025-09-13 00:05:45.27358146 +0000 UTC m=+50.371227943" Sep 13 00:05:48.121927 systemd[1]: Started sshd@8-10.0.0.78:22-10.0.0.1:38032.service - OpenSSH per-connection server daemon (10.0.0.1:38032). Sep 13 00:05:48.202781 sshd[5389]: Accepted publickey for core from 10.0.0.1 port 38032 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:05:48.205033 sshd[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:48.211629 systemd-logind[1428]: New session 9 of user core. Sep 13 00:05:48.220246 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:05:48.537879 sshd[5389]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:48.541254 systemd[1]: sshd@8-10.0.0.78:22-10.0.0.1:38032.service: Deactivated successfully. Sep 13 00:05:48.545457 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:05:48.546465 systemd-logind[1428]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:05:48.548707 systemd-logind[1428]: Removed session 9. Sep 13 00:05:53.551179 systemd[1]: Started sshd@9-10.0.0.78:22-10.0.0.1:46428.service - OpenSSH per-connection server daemon (10.0.0.1:46428). 
Sep 13 00:05:53.595222 sshd[5437]: Accepted publickey for core from 10.0.0.1 port 46428 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:05:53.597444 sshd[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:53.605918 systemd-logind[1428]: New session 10 of user core. Sep 13 00:05:53.613266 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:05:53.760554 sshd[5437]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:53.768586 systemd[1]: sshd@9-10.0.0.78:22-10.0.0.1:46428.service: Deactivated successfully. Sep 13 00:05:53.770774 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:05:53.772338 systemd-logind[1428]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:05:53.780353 systemd[1]: Started sshd@10-10.0.0.78:22-10.0.0.1:46434.service - OpenSSH per-connection server daemon (10.0.0.1:46434). Sep 13 00:05:53.781264 systemd-logind[1428]: Removed session 10. Sep 13 00:05:53.814836 sshd[5454]: Accepted publickey for core from 10.0.0.1 port 46434 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:05:53.816686 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:53.820501 systemd-logind[1428]: New session 11 of user core. Sep 13 00:05:53.831219 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:05:54.082883 sshd[5454]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:54.093856 systemd[1]: sshd@10-10.0.0.78:22-10.0.0.1:46434.service: Deactivated successfully. Sep 13 00:05:54.095568 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:05:54.097035 systemd-logind[1428]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:05:54.106671 systemd[1]: Started sshd@11-10.0.0.78:22-10.0.0.1:46442.service - OpenSSH per-connection server daemon (10.0.0.1:46442). Sep 13 00:05:54.109023 systemd-logind[1428]: Removed session 11. Sep 13 00:05:54.149763 sshd[5467]: Accepted publickey for core from 10.0.0.1 port 46442 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:05:54.151072 sshd[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:54.155165 systemd-logind[1428]: New session 12 of user core. Sep 13 00:05:54.159192 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:05:54.307199 sshd[5467]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:54.310760 systemd[1]: sshd@11-10.0.0.78:22-10.0.0.1:46442.service: Deactivated successfully. Sep 13 00:05:54.312725 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:05:54.313369 systemd-logind[1428]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:05:54.314194 systemd-logind[1428]: Removed session 12. 
Sep 13 00:05:54.997810 containerd[1445]: time="2025-09-13T00:05:54.997772860Z" level=info msg="StopPodSandbox for \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\"" Sep 13 00:05:55.076728 containerd[1445]: 2025-09-13 00:05:55.035 [WARNING][5515] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" WorkloadEndpoint="localhost-k8s-whisker--85f66bff6f--klv7r-eth0" Sep 13 00:05:55.076728 containerd[1445]: 2025-09-13 00:05:55.035 [INFO][5515] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:55.076728 containerd[1445]: 2025-09-13 00:05:55.035 [INFO][5515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" iface="eth0" netns="" Sep 13 00:05:55.076728 containerd[1445]: 2025-09-13 00:05:55.035 [INFO][5515] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:55.076728 containerd[1445]: 2025-09-13 00:05:55.035 [INFO][5515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:55.076728 containerd[1445]: 2025-09-13 00:05:55.060 [INFO][5524] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" HandleID="k8s-pod-network.b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Workload="localhost-k8s-whisker--85f66bff6f--klv7r-eth0" Sep 13 00:05:55.076728 containerd[1445]: 2025-09-13 00:05:55.060 [INFO][5524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.076728 containerd[1445]: 2025-09-13 00:05:55.060 [INFO][5524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:55.076728 containerd[1445]: 2025-09-13 00:05:55.071 [WARNING][5524] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" HandleID="k8s-pod-network.b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Workload="localhost-k8s-whisker--85f66bff6f--klv7r-eth0" Sep 13 00:05:55.076728 containerd[1445]: 2025-09-13 00:05:55.071 [INFO][5524] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" HandleID="k8s-pod-network.b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Workload="localhost-k8s-whisker--85f66bff6f--klv7r-eth0" Sep 13 00:05:55.076728 containerd[1445]: 2025-09-13 00:05:55.073 [INFO][5524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.076728 containerd[1445]: 2025-09-13 00:05:55.075 [INFO][5515] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:55.077172 containerd[1445]: time="2025-09-13T00:05:55.076791461Z" level=info msg="TearDown network for sandbox \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\" successfully" Sep 13 00:05:55.077172 containerd[1445]: time="2025-09-13T00:05:55.076823101Z" level=info msg="StopPodSandbox for \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\" returns successfully" Sep 13 00:05:55.077517 containerd[1445]: time="2025-09-13T00:05:55.077491466Z" level=info msg="RemovePodSandbox for \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\"" Sep 13 00:05:55.089789 containerd[1445]: time="2025-09-13T00:05:55.089730839Z" level=info msg="Forcibly stopping sandbox \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\"" Sep 13 00:05:55.158017 containerd[1445]: 2025-09-13 00:05:55.124 [WARNING][5542] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" WorkloadEndpoint="localhost-k8s-whisker--85f66bff6f--klv7r-eth0" Sep 13 00:05:55.158017 containerd[1445]: 2025-09-13 00:05:55.124 [INFO][5542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:55.158017 containerd[1445]: 2025-09-13 00:05:55.124 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" iface="eth0" netns="" Sep 13 00:05:55.158017 containerd[1445]: 2025-09-13 00:05:55.124 [INFO][5542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:55.158017 containerd[1445]: 2025-09-13 00:05:55.124 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:55.158017 containerd[1445]: 2025-09-13 00:05:55.143 [INFO][5551] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" HandleID="k8s-pod-network.b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Workload="localhost-k8s-whisker--85f66bff6f--klv7r-eth0" Sep 13 00:05:55.158017 containerd[1445]: 2025-09-13 00:05:55.143 [INFO][5551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.158017 containerd[1445]: 2025-09-13 00:05:55.143 [INFO][5551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:55.158017 containerd[1445]: 2025-09-13 00:05:55.152 [WARNING][5551] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" HandleID="k8s-pod-network.b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Workload="localhost-k8s-whisker--85f66bff6f--klv7r-eth0" Sep 13 00:05:55.158017 containerd[1445]: 2025-09-13 00:05:55.152 [INFO][5551] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" HandleID="k8s-pod-network.b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Workload="localhost-k8s-whisker--85f66bff6f--klv7r-eth0" Sep 13 00:05:55.158017 containerd[1445]: 2025-09-13 00:05:55.154 [INFO][5551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.158017 containerd[1445]: 2025-09-13 00:05:55.156 [INFO][5542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1" Sep 13 00:05:55.158799 containerd[1445]: time="2025-09-13T00:05:55.158406681Z" level=info msg="TearDown network for sandbox \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\" successfully" Sep 13 00:05:55.177895 containerd[1445]: time="2025-09-13T00:05:55.177806909Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:05:55.178061 containerd[1445]: time="2025-09-13T00:05:55.177906590Z" level=info msg="RemovePodSandbox \"b7d383f3e115da9b85c2e843dadcc0ca39b52655c535544e004c4b1fa3436ae1\" returns successfully" Sep 13 00:05:55.178640 containerd[1445]: time="2025-09-13T00:05:55.178560115Z" level=info msg="StopPodSandbox for \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\"" Sep 13 00:05:55.254646 containerd[1445]: 2025-09-13 00:05:55.220 [WARNING][5569] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c5t74-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d15f87e-d4b3-4e20-9451-06b0fba27ad4", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61", Pod:"csi-node-driver-c5t74", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a9887bd885", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:55.254646 containerd[1445]: 2025-09-13 00:05:55.220 [INFO][5569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:55.254646 containerd[1445]: 2025-09-13 00:05:55.220 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" iface="eth0" netns="" Sep 13 00:05:55.254646 containerd[1445]: 2025-09-13 00:05:55.220 [INFO][5569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:55.254646 containerd[1445]: 2025-09-13 00:05:55.220 [INFO][5569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:55.254646 containerd[1445]: 2025-09-13 00:05:55.240 [INFO][5577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" HandleID="k8s-pod-network.583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Workload="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:55.254646 containerd[1445]: 2025-09-13 00:05:55.240 [INFO][5577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.254646 containerd[1445]: 2025-09-13 00:05:55.240 [INFO][5577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:55.254646 containerd[1445]: 2025-09-13 00:05:55.248 [WARNING][5577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" HandleID="k8s-pod-network.583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Workload="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:55.254646 containerd[1445]: 2025-09-13 00:05:55.249 [INFO][5577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" HandleID="k8s-pod-network.583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Workload="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:55.254646 containerd[1445]: 2025-09-13 00:05:55.251 [INFO][5577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.254646 containerd[1445]: 2025-09-13 00:05:55.252 [INFO][5569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:55.254646 containerd[1445]: time="2025-09-13T00:05:55.254615053Z" level=info msg="TearDown network for sandbox \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\" successfully" Sep 13 00:05:55.254646 containerd[1445]: time="2025-09-13T00:05:55.254638733Z" level=info msg="StopPodSandbox for \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\" returns successfully" Sep 13 00:05:55.255673 containerd[1445]: time="2025-09-13T00:05:55.255398099Z" level=info msg="RemovePodSandbox for \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\"" Sep 13 00:05:55.255673 containerd[1445]: time="2025-09-13T00:05:55.255428739Z" level=info msg="Forcibly stopping sandbox \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\"" Sep 13 00:05:55.330070 containerd[1445]: 2025-09-13 00:05:55.292 [WARNING][5595] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c5t74-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d15f87e-d4b3-4e20-9451-06b0fba27ad4", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97dc01b4fc1e50dc7f3d0b1fe731753d253af6f7118924117d7b1ebc3c26ac61", Pod:"csi-node-driver-c5t74", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a9887bd885", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:55.330070 containerd[1445]: 2025-09-13 00:05:55.292 [INFO][5595] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:55.330070 containerd[1445]: 2025-09-13 00:05:55.292 [INFO][5595] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" iface="eth0" netns="" Sep 13 00:05:55.330070 containerd[1445]: 2025-09-13 00:05:55.292 [INFO][5595] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:55.330070 containerd[1445]: 2025-09-13 00:05:55.292 [INFO][5595] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:55.330070 containerd[1445]: 2025-09-13 00:05:55.314 [INFO][5604] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" HandleID="k8s-pod-network.583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Workload="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:55.330070 containerd[1445]: 2025-09-13 00:05:55.314 [INFO][5604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.330070 containerd[1445]: 2025-09-13 00:05:55.314 [INFO][5604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:55.330070 containerd[1445]: 2025-09-13 00:05:55.324 [WARNING][5604] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" HandleID="k8s-pod-network.583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Workload="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:55.330070 containerd[1445]: 2025-09-13 00:05:55.324 [INFO][5604] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" HandleID="k8s-pod-network.583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Workload="localhost-k8s-csi--node--driver--c5t74-eth0" Sep 13 00:05:55.330070 containerd[1445]: 2025-09-13 00:05:55.326 [INFO][5604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.330070 containerd[1445]: 2025-09-13 00:05:55.328 [INFO][5595] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2" Sep 13 00:05:55.330607 containerd[1445]: time="2025-09-13T00:05:55.330098586Z" level=info msg="TearDown network for sandbox \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\" successfully" Sep 13 00:05:55.334932 containerd[1445]: time="2025-09-13T00:05:55.334879543Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:05:55.335068 containerd[1445]: time="2025-09-13T00:05:55.334960703Z" level=info msg="RemovePodSandbox \"583c4c8c8b1ded56479095bfa88ab860497207a099d990004f5fea23fb8694b2\" returns successfully" Sep 13 00:05:55.335988 containerd[1445]: time="2025-09-13T00:05:55.335524308Z" level=info msg="StopPodSandbox for \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\"" Sep 13 00:05:55.402676 containerd[1445]: 2025-09-13 00:05:55.370 [WARNING][5622] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"af8d6313-8016-4b63-b286-8fb59033218e", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8", Pod:"coredns-7c65d6cfc9-42f6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali287d90e9f45", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:55.402676 containerd[1445]: 2025-09-13 00:05:55.371 [INFO][5622] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:55.402676 containerd[1445]: 2025-09-13 00:05:55.371 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" iface="eth0" netns="" Sep 13 00:05:55.402676 containerd[1445]: 2025-09-13 00:05:55.371 [INFO][5622] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:55.402676 containerd[1445]: 2025-09-13 00:05:55.371 [INFO][5622] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:55.402676 containerd[1445]: 2025-09-13 00:05:55.389 [INFO][5631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" HandleID="k8s-pod-network.9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Workload="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:55.402676 containerd[1445]: 2025-09-13 00:05:55.389 [INFO][5631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.402676 containerd[1445]: 2025-09-13 00:05:55.389 [INFO][5631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:05:55.402676 containerd[1445]: 2025-09-13 00:05:55.398 [WARNING][5631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" HandleID="k8s-pod-network.9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Workload="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:55.402676 containerd[1445]: 2025-09-13 00:05:55.398 [INFO][5631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" HandleID="k8s-pod-network.9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Workload="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:55.402676 containerd[1445]: 2025-09-13 00:05:55.399 [INFO][5631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.402676 containerd[1445]: 2025-09-13 00:05:55.401 [INFO][5622] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:55.403287 containerd[1445]: time="2025-09-13T00:05:55.403165502Z" level=info msg="TearDown network for sandbox \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\" successfully" Sep 13 00:05:55.403287 containerd[1445]: time="2025-09-13T00:05:55.403195622Z" level=info msg="StopPodSandbox for \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\" returns successfully" Sep 13 00:05:55.403747 containerd[1445]: time="2025-09-13T00:05:55.403700226Z" level=info msg="RemovePodSandbox for \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\"" Sep 13 00:05:55.403747 containerd[1445]: time="2025-09-13T00:05:55.403745186Z" level=info msg="Forcibly stopping sandbox \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\"" Sep 13 00:05:55.476419 containerd[1445]: 2025-09-13 00:05:55.437 [WARNING][5648] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"af8d6313-8016-4b63-b286-8fb59033218e", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2adb17b03fa4633365078616e3c814f85e081d7512dfa55a37496e9b72c7d8d8", Pod:"coredns-7c65d6cfc9-42f6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali287d90e9f45", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:55.476419 containerd[1445]: 2025-09-13 00:05:55.438 [INFO][5648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:55.476419 containerd[1445]: 2025-09-13 00:05:55.438 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" iface="eth0" netns="" Sep 13 00:05:55.476419 containerd[1445]: 2025-09-13 00:05:55.438 [INFO][5648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:55.476419 containerd[1445]: 2025-09-13 00:05:55.438 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:55.476419 containerd[1445]: 2025-09-13 00:05:55.461 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" HandleID="k8s-pod-network.9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Workload="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:55.476419 containerd[1445]: 2025-09-13 00:05:55.461 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.476419 containerd[1445]: 2025-09-13 00:05:55.461 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:05:55.476419 containerd[1445]: 2025-09-13 00:05:55.471 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" HandleID="k8s-pod-network.9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Workload="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:55.476419 containerd[1445]: 2025-09-13 00:05:55.471 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" HandleID="k8s-pod-network.9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Workload="localhost-k8s-coredns--7c65d6cfc9--42f6q-eth0" Sep 13 00:05:55.476419 containerd[1445]: 2025-09-13 00:05:55.473 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.476419 containerd[1445]: 2025-09-13 00:05:55.474 [INFO][5648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3" Sep 13 00:05:55.476841 containerd[1445]: time="2025-09-13T00:05:55.476473379Z" level=info msg="TearDown network for sandbox \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\" successfully" Sep 13 00:05:55.479521 containerd[1445]: time="2025-09-13T00:05:55.479470322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:05:55.479607 containerd[1445]: time="2025-09-13T00:05:55.479542042Z" level=info msg="RemovePodSandbox \"9e48784fbd01b12fb930e81609a0e90850f2af4fd205902c892058b10a244ba3\" returns successfully" Sep 13 00:05:55.480344 containerd[1445]: time="2025-09-13T00:05:55.480016406Z" level=info msg="StopPodSandbox for \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\"" Sep 13 00:05:55.563679 containerd[1445]: 2025-09-13 00:05:55.527 [WARNING][5674] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0", GenerateName:"calico-apiserver-5c55569db5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5742ff1-d3ec-4bea-89be-500213d11650", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c55569db5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f", Pod:"calico-apiserver-5c55569db5-8wwmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66354e57ba9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:55.563679 containerd[1445]: 2025-09-13 00:05:55.528 [INFO][5674] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:55.563679 containerd[1445]: 2025-09-13 00:05:55.528 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" iface="eth0" netns="" Sep 13 00:05:55.563679 containerd[1445]: 2025-09-13 00:05:55.528 [INFO][5674] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:55.563679 containerd[1445]: 2025-09-13 00:05:55.528 [INFO][5674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:55.563679 containerd[1445]: 2025-09-13 00:05:55.548 [INFO][5682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" HandleID="k8s-pod-network.6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Workload="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:55.563679 containerd[1445]: 2025-09-13 00:05:55.548 [INFO][5682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.563679 containerd[1445]: 2025-09-13 00:05:55.548 [INFO][5682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:55.563679 containerd[1445]: 2025-09-13 00:05:55.558 [WARNING][5682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" HandleID="k8s-pod-network.6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Workload="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:55.563679 containerd[1445]: 2025-09-13 00:05:55.558 [INFO][5682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" HandleID="k8s-pod-network.6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Workload="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:55.563679 containerd[1445]: 2025-09-13 00:05:55.560 [INFO][5682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.563679 containerd[1445]: 2025-09-13 00:05:55.561 [INFO][5674] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:55.563679 containerd[1445]: time="2025-09-13T00:05:55.563635042Z" level=info msg="TearDown network for sandbox \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\" successfully" Sep 13 00:05:55.563679 containerd[1445]: time="2025-09-13T00:05:55.563660042Z" level=info msg="StopPodSandbox for \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\" returns successfully" Sep 13 00:05:55.565573 containerd[1445]: time="2025-09-13T00:05:55.565533296Z" level=info msg="RemovePodSandbox for \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\"" Sep 13 00:05:55.565573 containerd[1445]: time="2025-09-13T00:05:55.565569496Z" level=info msg="Forcibly stopping sandbox \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\"" Sep 13 00:05:55.640027 containerd[1445]: 2025-09-13 00:05:55.602 [WARNING][5699] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0", GenerateName:"calico-apiserver-5c55569db5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5742ff1-d3ec-4bea-89be-500213d11650", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c55569db5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2884bc637a528234faccc1be624273b0757dbdd397ac16191eb240554f89b3f", Pod:"calico-apiserver-5c55569db5-8wwmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66354e57ba9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:55.640027 containerd[1445]: 2025-09-13 00:05:55.602 [INFO][5699] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:55.640027 containerd[1445]: 2025-09-13 00:05:55.602 [INFO][5699] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" iface="eth0" netns="" Sep 13 00:05:55.640027 containerd[1445]: 2025-09-13 00:05:55.602 [INFO][5699] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:55.640027 containerd[1445]: 2025-09-13 00:05:55.602 [INFO][5699] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:55.640027 containerd[1445]: 2025-09-13 00:05:55.624 [INFO][5707] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" HandleID="k8s-pod-network.6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Workload="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:55.640027 containerd[1445]: 2025-09-13 00:05:55.624 [INFO][5707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.640027 containerd[1445]: 2025-09-13 00:05:55.624 [INFO][5707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:55.640027 containerd[1445]: 2025-09-13 00:05:55.635 [WARNING][5707] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" HandleID="k8s-pod-network.6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Workload="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:55.640027 containerd[1445]: 2025-09-13 00:05:55.635 [INFO][5707] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" HandleID="k8s-pod-network.6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Workload="localhost-k8s-calico--apiserver--5c55569db5--8wwmc-eth0" Sep 13 00:05:55.640027 containerd[1445]: 2025-09-13 00:05:55.636 [INFO][5707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.640027 containerd[1445]: 2025-09-13 00:05:55.638 [INFO][5699] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82" Sep 13 00:05:55.640471 containerd[1445]: time="2025-09-13T00:05:55.640145263Z" level=info msg="TearDown network for sandbox \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\" successfully" Sep 13 00:05:55.645649 containerd[1445]: time="2025-09-13T00:05:55.645602185Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:05:55.645724 containerd[1445]: time="2025-09-13T00:05:55.645680105Z" level=info msg="RemovePodSandbox \"6de516839d612de15df0cc2091a856ecc1778c0b28310058851374de45cdfc82\" returns successfully" Sep 13 00:05:55.646508 containerd[1445]: time="2025-09-13T00:05:55.646183629Z" level=info msg="StopPodSandbox for \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\"" Sep 13 00:05:55.729022 containerd[1445]: 2025-09-13 00:05:55.689 [WARNING][5725] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"37010d28-d42d-4197-85d1-065db74b5133", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423", Pod:"coredns-7c65d6cfc9-lmfsr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a5598628df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:55.729022 containerd[1445]: 2025-09-13 00:05:55.690 [INFO][5725] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:55.729022 containerd[1445]: 2025-09-13 00:05:55.690 [INFO][5725] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" iface="eth0" netns="" Sep 13 00:05:55.729022 containerd[1445]: 2025-09-13 00:05:55.690 [INFO][5725] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:55.729022 containerd[1445]: 2025-09-13 00:05:55.690 [INFO][5725] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:55.729022 containerd[1445]: 2025-09-13 00:05:55.712 [INFO][5734] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" HandleID="k8s-pod-network.533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Workload="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:55.729022 containerd[1445]: 2025-09-13 00:05:55.712 [INFO][5734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.729022 containerd[1445]: 2025-09-13 00:05:55.712 [INFO][5734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:05:55.729022 containerd[1445]: 2025-09-13 00:05:55.721 [WARNING][5734] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" HandleID="k8s-pod-network.533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Workload="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:55.729022 containerd[1445]: 2025-09-13 00:05:55.722 [INFO][5734] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" HandleID="k8s-pod-network.533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Workload="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:55.729022 containerd[1445]: 2025-09-13 00:05:55.725 [INFO][5734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.729022 containerd[1445]: 2025-09-13 00:05:55.727 [INFO][5725] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:55.730235 containerd[1445]: time="2025-09-13T00:05:55.729081859Z" level=info msg="TearDown network for sandbox \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\" successfully" Sep 13 00:05:55.730235 containerd[1445]: time="2025-09-13T00:05:55.729109019Z" level=info msg="StopPodSandbox for \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\" returns successfully" Sep 13 00:05:55.730235 containerd[1445]: time="2025-09-13T00:05:55.729633183Z" level=info msg="RemovePodSandbox for \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\"" Sep 13 00:05:55.730235 containerd[1445]: time="2025-09-13T00:05:55.729676024Z" level=info msg="Forcibly stopping sandbox \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\"" Sep 13 00:05:55.799352 containerd[1445]: 2025-09-13 00:05:55.767 [WARNING][5752] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"37010d28-d42d-4197-85d1-065db74b5133", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4d0eca9eddaf6237b902c8cfdd8443ebc4c9a3f3393ce27703f19e56fd9d423", Pod:"coredns-7c65d6cfc9-lmfsr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a5598628df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:55.799352 containerd[1445]: 2025-09-13 00:05:55.767 [INFO][5752] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:55.799352 containerd[1445]: 2025-09-13 00:05:55.767 [INFO][5752] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" iface="eth0" netns="" Sep 13 00:05:55.799352 containerd[1445]: 2025-09-13 00:05:55.767 [INFO][5752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:55.799352 containerd[1445]: 2025-09-13 00:05:55.767 [INFO][5752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:55.799352 containerd[1445]: 2025-09-13 00:05:55.785 [INFO][5761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" HandleID="k8s-pod-network.533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Workload="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:55.799352 containerd[1445]: 2025-09-13 00:05:55.786 [INFO][5761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.799352 containerd[1445]: 2025-09-13 00:05:55.786 [INFO][5761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:05:55.799352 containerd[1445]: 2025-09-13 00:05:55.794 [WARNING][5761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" HandleID="k8s-pod-network.533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Workload="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:55.799352 containerd[1445]: 2025-09-13 00:05:55.794 [INFO][5761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" HandleID="k8s-pod-network.533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Workload="localhost-k8s-coredns--7c65d6cfc9--lmfsr-eth0" Sep 13 00:05:55.799352 containerd[1445]: 2025-09-13 00:05:55.795 [INFO][5761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.799352 containerd[1445]: 2025-09-13 00:05:55.797 [INFO][5752] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77" Sep 13 00:05:55.801670 containerd[1445]: time="2025-09-13T00:05:55.799979118Z" level=info msg="TearDown network for sandbox \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\" successfully" Sep 13 00:05:55.803567 containerd[1445]: time="2025-09-13T00:05:55.803528745Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:05:55.803856 containerd[1445]: time="2025-09-13T00:05:55.803797467Z" level=info msg="RemovePodSandbox \"533b22f9e78e4d7b3b5cb84f8f7e58133c689c58df0bb37ed7147c70ef02ad77\" returns successfully" Sep 13 00:05:55.804544 containerd[1445]: time="2025-09-13T00:05:55.804506153Z" level=info msg="StopPodSandbox for \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\"" Sep 13 00:05:55.875850 containerd[1445]: 2025-09-13 00:05:55.841 [WARNING][5779] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--snwp2-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"fa29da06-fe61-4e04-b4a0-a7f57e663902", ResourceVersion:"1237", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec", Pod:"goldmane-7988f88666-snwp2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1dfcee83e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:55.875850 containerd[1445]: 2025-09-13 00:05:55.841 [INFO][5779] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:55.875850 containerd[1445]: 2025-09-13 00:05:55.841 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" iface="eth0" netns="" Sep 13 00:05:55.875850 containerd[1445]: 2025-09-13 00:05:55.841 [INFO][5779] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:55.875850 containerd[1445]: 2025-09-13 00:05:55.841 [INFO][5779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:55.875850 containerd[1445]: 2025-09-13 00:05:55.859 [INFO][5788] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" HandleID="k8s-pod-network.7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Workload="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:55.875850 containerd[1445]: 2025-09-13 00:05:55.860 [INFO][5788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.875850 containerd[1445]: 2025-09-13 00:05:55.860 [INFO][5788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:55.875850 containerd[1445]: 2025-09-13 00:05:55.869 [WARNING][5788] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" HandleID="k8s-pod-network.7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Workload="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:55.875850 containerd[1445]: 2025-09-13 00:05:55.869 [INFO][5788] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" HandleID="k8s-pod-network.7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Workload="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:55.875850 containerd[1445]: 2025-09-13 00:05:55.872 [INFO][5788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.875850 containerd[1445]: 2025-09-13 00:05:55.874 [INFO][5779] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:55.876422 containerd[1445]: time="2025-09-13T00:05:55.875886255Z" level=info msg="TearDown network for sandbox \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\" successfully" Sep 13 00:05:55.876422 containerd[1445]: time="2025-09-13T00:05:55.875911575Z" level=info msg="StopPodSandbox for \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\" returns successfully" Sep 13 00:05:55.876927 containerd[1445]: time="2025-09-13T00:05:55.876642021Z" level=info msg="RemovePodSandbox for \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\"" Sep 13 00:05:55.876927 containerd[1445]: time="2025-09-13T00:05:55.876675581Z" level=info msg="Forcibly stopping sandbox \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\"" Sep 13 00:05:55.963385 containerd[1445]: 2025-09-13 00:05:55.928 [WARNING][5806] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--snwp2-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"fa29da06-fe61-4e04-b4a0-a7f57e663902", ResourceVersion:"1237", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f45708abc4d961a25fd2c8ef9d13378c7c6692f536bee2bd42b1d8437ef511ec", Pod:"goldmane-7988f88666-snwp2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1dfcee83e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:55.963385 containerd[1445]: 2025-09-13 00:05:55.928 [INFO][5806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:55.963385 containerd[1445]: 2025-09-13 00:05:55.928 [INFO][5806] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" iface="eth0" netns="" Sep 13 00:05:55.963385 containerd[1445]: 2025-09-13 00:05:55.928 [INFO][5806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:55.963385 containerd[1445]: 2025-09-13 00:05:55.928 [INFO][5806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:55.963385 containerd[1445]: 2025-09-13 00:05:55.947 [INFO][5815] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" HandleID="k8s-pod-network.7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Workload="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:55.963385 containerd[1445]: 2025-09-13 00:05:55.947 [INFO][5815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.963385 containerd[1445]: 2025-09-13 00:05:55.947 [INFO][5815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:55.963385 containerd[1445]: 2025-09-13 00:05:55.957 [WARNING][5815] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" HandleID="k8s-pod-network.7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Workload="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:55.963385 containerd[1445]: 2025-09-13 00:05:55.957 [INFO][5815] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" HandleID="k8s-pod-network.7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Workload="localhost-k8s-goldmane--7988f88666--snwp2-eth0" Sep 13 00:05:55.963385 containerd[1445]: 2025-09-13 00:05:55.960 [INFO][5815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.963385 containerd[1445]: 2025-09-13 00:05:55.961 [INFO][5806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07" Sep 13 00:05:55.965342 containerd[1445]: time="2025-09-13T00:05:55.963858724Z" level=info msg="TearDown network for sandbox \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\" successfully" Sep 13 00:05:55.966754 containerd[1445]: time="2025-09-13T00:05:55.966719826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:05:55.967014 containerd[1445]: time="2025-09-13T00:05:55.966994388Z" level=info msg="RemovePodSandbox \"7a815fb2c1d11ffe4c6db40c0e7feb3a8e0369de8b9353664f4eb60616013b07\" returns successfully" Sep 13 00:05:55.967733 containerd[1445]: time="2025-09-13T00:05:55.967706753Z" level=info msg="StopPodSandbox for \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\"" Sep 13 00:05:56.039160 containerd[1445]: 2025-09-13 00:05:56.004 [WARNING][5833] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0", GenerateName:"calico-kube-controllers-5f5b759db5-", Namespace:"calico-system", SelfLink:"", UID:"071f06bb-6417-46e6-bfae-280329d73932", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5b759db5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4", Pod:"calico-kube-controllers-5f5b759db5-cjrjx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f426cfbf15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:56.039160 containerd[1445]: 2025-09-13 00:05:56.004 [INFO][5833] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:56.039160 containerd[1445]: 2025-09-13 00:05:56.004 [INFO][5833] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" iface="eth0" netns="" Sep 13 00:05:56.039160 containerd[1445]: 2025-09-13 00:05:56.004 [INFO][5833] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:56.039160 containerd[1445]: 2025-09-13 00:05:56.004 [INFO][5833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:56.039160 containerd[1445]: 2025-09-13 00:05:56.024 [INFO][5842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" HandleID="k8s-pod-network.039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Workload="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:56.039160 containerd[1445]: 2025-09-13 00:05:56.024 [INFO][5842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:56.039160 containerd[1445]: 2025-09-13 00:05:56.024 [INFO][5842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:56.039160 containerd[1445]: 2025-09-13 00:05:56.033 [WARNING][5842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" HandleID="k8s-pod-network.039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Workload="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:56.039160 containerd[1445]: 2025-09-13 00:05:56.033 [INFO][5842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" HandleID="k8s-pod-network.039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Workload="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:56.039160 containerd[1445]: 2025-09-13 00:05:56.035 [INFO][5842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:56.039160 containerd[1445]: 2025-09-13 00:05:56.037 [INFO][5833] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:56.039806 containerd[1445]: time="2025-09-13T00:05:56.039256294Z" level=info msg="TearDown network for sandbox \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\" successfully" Sep 13 00:05:56.039806 containerd[1445]: time="2025-09-13T00:05:56.039283294Z" level=info msg="StopPodSandbox for \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\" returns successfully" Sep 13 00:05:56.039806 containerd[1445]: time="2025-09-13T00:05:56.039769138Z" level=info msg="RemovePodSandbox for \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\"" Sep 13 00:05:56.039806 containerd[1445]: time="2025-09-13T00:05:56.039798938Z" level=info msg="Forcibly stopping sandbox \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\"" Sep 13 00:05:56.107444 containerd[1445]: 2025-09-13 00:05:56.074 [WARNING][5860] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0", GenerateName:"calico-kube-controllers-5f5b759db5-", Namespace:"calico-system", SelfLink:"", UID:"071f06bb-6417-46e6-bfae-280329d73932", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5b759db5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bca6daa54bb0ac74f3d807f899d07d9bac672e8ce1ad857a1f562a857d57ebb4", Pod:"calico-kube-controllers-5f5b759db5-cjrjx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f426cfbf15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:56.107444 containerd[1445]: 2025-09-13 00:05:56.075 [INFO][5860] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:56.107444 containerd[1445]: 2025-09-13 00:05:56.075 [INFO][5860] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" iface="eth0" netns="" Sep 13 00:05:56.107444 containerd[1445]: 2025-09-13 00:05:56.075 [INFO][5860] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:56.107444 containerd[1445]: 2025-09-13 00:05:56.075 [INFO][5860] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:56.107444 containerd[1445]: 2025-09-13 00:05:56.093 [INFO][5869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" HandleID="k8s-pod-network.039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Workload="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:56.107444 containerd[1445]: 2025-09-13 00:05:56.093 [INFO][5869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:56.107444 containerd[1445]: 2025-09-13 00:05:56.093 [INFO][5869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:56.107444 containerd[1445]: 2025-09-13 00:05:56.102 [WARNING][5869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" HandleID="k8s-pod-network.039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Workload="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:56.107444 containerd[1445]: 2025-09-13 00:05:56.102 [INFO][5869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" HandleID="k8s-pod-network.039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Workload="localhost-k8s-calico--kube--controllers--5f5b759db5--cjrjx-eth0" Sep 13 00:05:56.107444 containerd[1445]: 2025-09-13 00:05:56.104 [INFO][5869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:56.107444 containerd[1445]: 2025-09-13 00:05:56.105 [INFO][5860] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063" Sep 13 00:05:56.107852 containerd[1445]: time="2025-09-13T00:05:56.107497567Z" level=info msg="TearDown network for sandbox \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\" successfully" Sep 13 00:05:56.115200 containerd[1445]: time="2025-09-13T00:05:56.115017264Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:05:56.115768 containerd[1445]: time="2025-09-13T00:05:56.115734669Z" level=info msg="RemovePodSandbox \"039570460b488cbd7fc1518e41d710d759d0c230a63d43ee226832fcd3ef7063\" returns successfully" Sep 13 00:05:56.116726 containerd[1445]: time="2025-09-13T00:05:56.116517995Z" level=info msg="StopPodSandbox for \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\"" Sep 13 00:05:56.189288 containerd[1445]: 2025-09-13 00:05:56.156 [WARNING][5887] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0", GenerateName:"calico-apiserver-5c55569db5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7691cf75-c4db-4e69-bd59-9bd4189e2702", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c55569db5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d", Pod:"calico-apiserver-5c55569db5-8jkvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6dcff6e4d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:56.189288 containerd[1445]: 2025-09-13 00:05:56.156 [INFO][5887] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:56.189288 containerd[1445]: 2025-09-13 00:05:56.156 [INFO][5887] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" iface="eth0" netns="" Sep 13 00:05:56.189288 containerd[1445]: 2025-09-13 00:05:56.156 [INFO][5887] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:56.189288 containerd[1445]: 2025-09-13 00:05:56.156 [INFO][5887] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:56.189288 containerd[1445]: 2025-09-13 00:05:56.175 [INFO][5896] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" HandleID="k8s-pod-network.0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Workload="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:56.189288 containerd[1445]: 2025-09-13 00:05:56.175 [INFO][5896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:56.189288 containerd[1445]: 2025-09-13 00:05:56.175 [INFO][5896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:56.189288 containerd[1445]: 2025-09-13 00:05:56.183 [WARNING][5896] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" HandleID="k8s-pod-network.0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Workload="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:56.189288 containerd[1445]: 2025-09-13 00:05:56.183 [INFO][5896] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" HandleID="k8s-pod-network.0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Workload="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:56.189288 containerd[1445]: 2025-09-13 00:05:56.185 [INFO][5896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:56.189288 containerd[1445]: 2025-09-13 00:05:56.187 [INFO][5887] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:56.191237 containerd[1445]: time="2025-09-13T00:05:56.191084596Z" level=info msg="TearDown network for sandbox \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\" successfully" Sep 13 00:05:56.191237 containerd[1445]: time="2025-09-13T00:05:56.191133396Z" level=info msg="StopPodSandbox for \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\" returns successfully" Sep 13 00:05:56.191626 containerd[1445]: time="2025-09-13T00:05:56.191599080Z" level=info msg="RemovePodSandbox for \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\"" Sep 13 00:05:56.191695 containerd[1445]: time="2025-09-13T00:05:56.191676480Z" level=info msg="Forcibly stopping sandbox \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\"" Sep 13 00:05:56.258181 containerd[1445]: 2025-09-13 00:05:56.226 [WARNING][5913] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0", GenerateName:"calico-apiserver-5c55569db5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7691cf75-c4db-4e69-bd59-9bd4189e2702", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c55569db5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d242ffcf14fe964a2f0be295908ef812477a9452e43b99f4d8ff0575699d86d", Pod:"calico-apiserver-5c55569db5-8jkvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6dcff6e4d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:56.258181 containerd[1445]: 2025-09-13 00:05:56.226 [INFO][5913] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:56.258181 containerd[1445]: 2025-09-13 00:05:56.226 [INFO][5913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" iface="eth0" netns="" Sep 13 00:05:56.258181 containerd[1445]: 2025-09-13 00:05:56.226 [INFO][5913] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:56.258181 containerd[1445]: 2025-09-13 00:05:56.226 [INFO][5913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:56.258181 containerd[1445]: 2025-09-13 00:05:56.244 [INFO][5922] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" HandleID="k8s-pod-network.0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Workload="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:56.258181 containerd[1445]: 2025-09-13 00:05:56.244 [INFO][5922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:56.258181 containerd[1445]: 2025-09-13 00:05:56.244 [INFO][5922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:56.258181 containerd[1445]: 2025-09-13 00:05:56.253 [WARNING][5922] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" HandleID="k8s-pod-network.0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Workload="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:56.258181 containerd[1445]: 2025-09-13 00:05:56.253 [INFO][5922] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" HandleID="k8s-pod-network.0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Workload="localhost-k8s-calico--apiserver--5c55569db5--8jkvx-eth0" Sep 13 00:05:56.258181 containerd[1445]: 2025-09-13 00:05:56.254 [INFO][5922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:56.258181 containerd[1445]: 2025-09-13 00:05:56.256 [INFO][5913] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e" Sep 13 00:05:56.258588 containerd[1445]: time="2025-09-13T00:05:56.258225821Z" level=info msg="TearDown network for sandbox \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\" successfully" Sep 13 00:05:56.261510 containerd[1445]: time="2025-09-13T00:05:56.261474845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:05:56.261577 containerd[1445]: time="2025-09-13T00:05:56.261546806Z" level=info msg="RemovePodSandbox \"0d59a95b639d24d935927760072b13fbd20334984939e143b722f624f63e136e\" returns successfully" Sep 13 00:05:59.325823 systemd[1]: Started sshd@12-10.0.0.78:22-10.0.0.1:46446.service - OpenSSH per-connection server daemon (10.0.0.1:46446). Sep 13 00:05:59.371975 sshd[5953]: Accepted publickey for core from 10.0.0.1 port 46446 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:05:59.373663 sshd[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:59.377998 systemd-logind[1428]: New session 13 of user core. Sep 13 00:05:59.382201 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:05:59.539534 sshd[5953]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:59.557119 systemd[1]: sshd@12-10.0.0.78:22-10.0.0.1:46446.service: Deactivated successfully. Sep 13 00:05:59.559695 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:05:59.561245 systemd-logind[1428]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:05:59.567358 systemd[1]: Started sshd@13-10.0.0.78:22-10.0.0.1:46450.service - OpenSSH per-connection server daemon (10.0.0.1:46450). Sep 13 00:05:59.568716 systemd-logind[1428]: Removed session 13. Sep 13 00:05:59.608863 sshd[5968]: Accepted publickey for core from 10.0.0.1 port 46450 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:05:59.610451 sshd[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:59.614684 systemd-logind[1428]: New session 14 of user core. Sep 13 00:05:59.620212 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:05:59.822957 sshd[5968]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:59.838975 systemd[1]: sshd@13-10.0.0.78:22-10.0.0.1:46450.service: Deactivated successfully. Sep 13 00:05:59.840637 systemd[1]: session-14.scope: Deactivated successfully. 
Sep 13 00:05:59.842581 systemd-logind[1428]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:05:59.853271 systemd[1]: Started sshd@14-10.0.0.78:22-10.0.0.1:46458.service - OpenSSH per-connection server daemon (10.0.0.1:46458). Sep 13 00:05:59.854583 systemd-logind[1428]: Removed session 14. Sep 13 00:05:59.891266 sshd[5980]: Accepted publickey for core from 10.0.0.1 port 46458 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:05:59.892759 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:59.897658 systemd-logind[1428]: New session 15 of user core. Sep 13 00:05:59.903197 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:06:01.474473 sshd[5980]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:01.486978 systemd[1]: sshd@14-10.0.0.78:22-10.0.0.1:46458.service: Deactivated successfully. Sep 13 00:06:01.490564 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:06:01.494709 systemd-logind[1428]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:06:01.503242 systemd[1]: Started sshd@15-10.0.0.78:22-10.0.0.1:37006.service - OpenSSH per-connection server daemon (10.0.0.1:37006). Sep 13 00:06:01.505957 systemd-logind[1428]: Removed session 15. Sep 13 00:06:01.539025 sshd[6005]: Accepted publickey for core from 10.0.0.1 port 37006 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:06:01.540580 sshd[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:01.544708 systemd-logind[1428]: New session 16 of user core. Sep 13 00:06:01.552224 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:06:02.071964 sshd[6005]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:02.083648 systemd[1]: sshd@15-10.0.0.78:22-10.0.0.1:37006.service: Deactivated successfully. Sep 13 00:06:02.086901 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:06:02.088498 systemd-logind[1428]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:06:02.102389 systemd[1]: Started sshd@16-10.0.0.78:22-10.0.0.1:37018.service - OpenSSH per-connection server daemon (10.0.0.1:37018). Sep 13 00:06:02.103600 systemd-logind[1428]: Removed session 16. Sep 13 00:06:02.137431 sshd[6020]: Accepted publickey for core from 10.0.0.1 port 37018 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:06:02.138860 sshd[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:02.143259 systemd-logind[1428]: New session 17 of user core. Sep 13 00:06:02.154249 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:06:02.278117 sshd[6020]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:02.281472 systemd[1]: sshd@16-10.0.0.78:22-10.0.0.1:37018.service: Deactivated successfully. Sep 13 00:06:02.283282 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:06:02.283904 systemd-logind[1428]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:06:02.284866 systemd-logind[1428]: Removed session 17. Sep 13 00:06:07.289132 systemd[1]: Started sshd@17-10.0.0.78:22-10.0.0.1:37026.service - OpenSSH per-connection server daemon (10.0.0.1:37026). 
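
The sshd@N-10.0.0.78:22-10.0.0.1:PORT.service units above use the naming pattern systemd applies to per-connection instances spawned from a socket unit with Accept=yes: each accepted TCP connection gets its own short-lived service instance. A minimal sketch of such a socket/template pair follows; the unit names and paths are illustrative, not Flatcar's shipped unit files.

    # sshd.socket -- one instance of the template below is started per accepted connection
    [Unit]
    Description=OpenSSH per-connection server socket

    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target

    # sshd@.service -- sshd runs in inetd mode (-i) against the accepted connection
    [Unit]
    Description=OpenSSH per-connection server daemon

    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket

Because each connection is its own unit, the "Deactivated successfully" lines above retire only that connection's sshd@…service, while the listening socket keeps accepting new connections.
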
Sep 13 00:06:07.330474 sshd[6058]: Accepted publickey for core from 10.0.0.1 port 37026 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:06:07.331978 sshd[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:07.335901 systemd-logind[1428]: New session 18 of user core. Sep 13 00:06:07.345239 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:06:07.481969 sshd[6058]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:07.485814 systemd[1]: sshd@17-10.0.0.78:22-10.0.0.1:37026.service: Deactivated successfully. Sep 13 00:06:07.487578 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:06:07.489134 systemd-logind[1428]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:06:07.489983 systemd-logind[1428]: Removed session 18. Sep 13 00:06:12.498240 systemd[1]: Started sshd@18-10.0.0.78:22-10.0.0.1:45324.service - OpenSSH per-connection server daemon (10.0.0.1:45324). Sep 13 00:06:12.558960 sshd[6078]: Accepted publickey for core from 10.0.0.1 port 45324 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:06:12.560331 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:12.564337 systemd-logind[1428]: New session 19 of user core. Sep 13 00:06:12.572252 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:06:12.723755 sshd[6078]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:12.727841 systemd[1]: sshd@18-10.0.0.78:22-10.0.0.1:45324.service: Deactivated successfully. Sep 13 00:06:12.731419 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:06:12.732370 systemd-logind[1428]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:06:12.733266 systemd-logind[1428]: Removed session 19. Sep 13 00:06:12.989004 kubelet[2465]: E0913 00:06:12.988490 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:15.757539 kubelet[2465]: I0913 00:06:15.757501 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:06:17.736934 systemd[1]: Started sshd@19-10.0.0.78:22-10.0.0.1:45336.service - OpenSSH per-connection server daemon (10.0.0.1:45336). Sep 13 00:06:17.791138 sshd[6116]: Accepted publickey for core from 10.0.0.1 port 45336 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:06:17.792808 sshd[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:17.799145 systemd-logind[1428]: New session 20 of user core. Sep 13 00:06:17.807261 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:06:17.989462 kubelet[2465]: E0913 00:06:17.989333 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:18.019223 sshd[6116]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:18.022657 systemd[1]: sshd@19-10.0.0.78:22-10.0.0.1:45336.service: Deactivated successfully. Sep 13 00:06:18.024792 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:06:18.025756 systemd-logind[1428]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:06:18.026878 systemd-logind[1428]: Removed session 20.
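
The kubelet "Nameserver limits exceeded" warnings above are emitted when the resolv.conf kubelet consumes lists more than three nameservers; kubelet keeps only the first three when building a pod's resolv.conf, which is why the applied line in the warning is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A hypothetical host resolv.conf of the kind that would trigger the warning (the fourth server is an illustrative value, not taken from this host):

    # /etc/resolv.conf (hypothetical example)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 9.9.9.9   # dropped by kubelet: only the first three are applied

Trimming the node's list to three entries, or pointing kubelet at a narrower file via its resolvConf setting, would silence the warning.
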