Sep 6 00:08:31.846079 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 6 00:08:31.846102 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025
Sep 6 00:08:31.846120 kernel: KASLR enabled
Sep 6 00:08:31.846126 kernel: efi: EFI v2.7 by EDK II
Sep 6 00:08:31.846132 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 6 00:08:31.846138 kernel: random: crng init done
Sep 6 00:08:31.846145 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:08:31.846151 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 6 00:08:31.846157 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 6 00:08:31.846164 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:08:31.846170 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:08:31.846176 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:08:31.846182 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:08:31.846188 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:08:31.846196 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:08:31.846203 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:08:31.846210 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:08:31.846216 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:08:31.846222 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 6 00:08:31.846229 kernel: NUMA: Failed to initialise from firmware
Sep 6 00:08:31.846235 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 00:08:31.846241 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 6 00:08:31.846247 kernel: Zone ranges:
Sep 6 00:08:31.846253 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 00:08:31.846260 kernel: DMA32 empty
Sep 6 00:08:31.846268 kernel: Normal empty
Sep 6 00:08:31.846274 kernel: Movable zone start for each node
Sep 6 00:08:31.846280 kernel: Early memory node ranges
Sep 6 00:08:31.846286 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 6 00:08:31.846292 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 6 00:08:31.846299 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 6 00:08:31.846305 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 6 00:08:31.846311 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 6 00:08:31.846318 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 6 00:08:31.846324 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 6 00:08:31.846330 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 00:08:31.846337 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 6 00:08:31.846345 kernel: psci: probing for conduit method from ACPI.
Sep 6 00:08:31.846351 kernel: psci: PSCIv1.1 detected in firmware.
Sep 6 00:08:31.846358 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 6 00:08:31.846367 kernel: psci: Trusted OS migration not required
Sep 6 00:08:31.846388 kernel: psci: SMC Calling Convention v1.1
Sep 6 00:08:31.846395 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 6 00:08:31.846403 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 6 00:08:31.846410 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 6 00:08:31.846417 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 6 00:08:31.846424 kernel: Detected PIPT I-cache on CPU0
Sep 6 00:08:31.846431 kernel: CPU features: detected: GIC system register CPU interface
Sep 6 00:08:31.846437 kernel: CPU features: detected: Hardware dirty bit management
Sep 6 00:08:31.846444 kernel: CPU features: detected: Spectre-v4
Sep 6 00:08:31.846450 kernel: CPU features: detected: Spectre-BHB
Sep 6 00:08:31.846457 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 6 00:08:31.846464 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 6 00:08:31.846472 kernel: CPU features: detected: ARM erratum 1418040
Sep 6 00:08:31.846479 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 6 00:08:31.846487 kernel: alternatives: applying boot alternatives
Sep 6 00:08:31.846494 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 6 00:08:31.846502 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:08:31.846508 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 00:08:31.846515 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 00:08:31.846522 kernel: Fallback order for Node 0: 0
Sep 6 00:08:31.846529 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 6 00:08:31.846535 kernel: Policy zone: DMA
Sep 6 00:08:31.846542 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:08:31.846550 kernel: software IO TLB: area num 4.
Sep 6 00:08:31.846557 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 6 00:08:31.846564 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Sep 6 00:08:31.846571 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 6 00:08:31.846577 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 6 00:08:31.846584 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:08:31.846592 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 6 00:08:31.846599 kernel: Trampoline variant of Tasks RCU enabled.
Sep 6 00:08:31.846606 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:08:31.846612 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:08:31.846619 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 6 00:08:31.846627 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 6 00:08:31.846634 kernel: GICv3: 256 SPIs implemented
Sep 6 00:08:31.846641 kernel: GICv3: 0 Extended SPIs implemented
Sep 6 00:08:31.846647 kernel: Root IRQ handler: gic_handle_irq
Sep 6 00:08:31.846654 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 6 00:08:31.846661 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 6 00:08:31.846667 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 6 00:08:31.846674 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 6 00:08:31.846681 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 6 00:08:31.846687 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 6 00:08:31.846694 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 6 00:08:31.846701 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 6 00:08:31.846709 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:08:31.846715 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 6 00:08:31.846722 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 6 00:08:31.846739 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 6 00:08:31.846746 kernel: arm-pv: using stolen time PV
Sep 6 00:08:31.846754 kernel: Console: colour dummy device 80x25
Sep 6 00:08:31.846761 kernel: ACPI: Core revision 20230628
Sep 6 00:08:31.846768 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 6 00:08:31.846775 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:08:31.846782 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 6 00:08:31.846790 kernel: landlock: Up and running.
Sep 6 00:08:31.846797 kernel: SELinux: Initializing.
Sep 6 00:08:31.846804 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:08:31.846811 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:08:31.846818 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 6 00:08:31.846825 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 6 00:08:31.846832 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:08:31.846839 kernel: rcu: Max phase no-delay instances is 400.
Sep 6 00:08:31.846846 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 6 00:08:31.846855 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 6 00:08:31.846861 kernel: Remapping and enabling EFI services.
Sep 6 00:08:31.846868 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:08:31.846875 kernel: Detected PIPT I-cache on CPU1
Sep 6 00:08:31.846882 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 6 00:08:31.846889 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 6 00:08:31.846896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:08:31.846903 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 6 00:08:31.846910 kernel: Detected PIPT I-cache on CPU2
Sep 6 00:08:31.846916 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 6 00:08:31.846925 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 6 00:08:31.846932 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:08:31.846944 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 6 00:08:31.846952 kernel: Detected PIPT I-cache on CPU3
Sep 6 00:08:31.846960 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 6 00:08:31.846967 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 6 00:08:31.846974 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:08:31.846981 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 6 00:08:31.846989 kernel: smp: Brought up 1 node, 4 CPUs
Sep 6 00:08:31.846997 kernel: SMP: Total of 4 processors activated.
Sep 6 00:08:31.847005 kernel: CPU features: detected: 32-bit EL0 Support
Sep 6 00:08:31.847012 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 6 00:08:31.847019 kernel: CPU features: detected: Common not Private translations
Sep 6 00:08:31.847027 kernel: CPU features: detected: CRC32 instructions
Sep 6 00:08:31.847034 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 6 00:08:31.847041 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 6 00:08:31.847049 kernel: CPU features: detected: LSE atomic instructions
Sep 6 00:08:31.847058 kernel: CPU features: detected: Privileged Access Never
Sep 6 00:08:31.847065 kernel: CPU features: detected: RAS Extension Support
Sep 6 00:08:31.847072 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 6 00:08:31.847079 kernel: CPU: All CPU(s) started at EL1
Sep 6 00:08:31.847086 kernel: alternatives: applying system-wide alternatives
Sep 6 00:08:31.847093 kernel: devtmpfs: initialized
Sep 6 00:08:31.847101 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:08:31.847109 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 6 00:08:31.847121 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:08:31.847130 kernel: SMBIOS 3.0.0 present.
Sep 6 00:08:31.847138 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 6 00:08:31.847145 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:08:31.847152 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 6 00:08:31.847160 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 6 00:08:31.847167 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 6 00:08:31.847174 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:08:31.847182 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Sep 6 00:08:31.847189 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:08:31.847198 kernel: cpuidle: using governor menu
Sep 6 00:08:31.847205 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 6 00:08:31.847213 kernel: ASID allocator initialised with 32768 entries
Sep 6 00:08:31.847220 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:08:31.847227 kernel: Serial: AMBA PL011 UART driver
Sep 6 00:08:31.847235 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 6 00:08:31.847242 kernel: Modules: 0 pages in range for non-PLT usage
Sep 6 00:08:31.847249 kernel: Modules: 509008 pages in range for PLT usage
Sep 6 00:08:31.847256 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 00:08:31.847265 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 6 00:08:31.847272 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 6 00:08:31.847280 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 6 00:08:31.847287 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:08:31.847294 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 6 00:08:31.847301 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 6 00:08:31.847308 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 6 00:08:31.847315 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:08:31.847322 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:08:31.847331 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:08:31.847338 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:08:31.847345 kernel: ACPI: Interpreter enabled
Sep 6 00:08:31.847353 kernel: ACPI: Using GIC for interrupt routing
Sep 6 00:08:31.847360 kernel: ACPI: MCFG table detected, 1 entries
Sep 6 00:08:31.847367 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 6 00:08:31.847374 kernel: printk: console [ttyAMA0] enabled
Sep 6 00:08:31.847382 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 00:08:31.847532 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:08:31.847611 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 6 00:08:31.847678 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 6 00:08:31.847777 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 6 00:08:31.847844 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 6 00:08:31.847854 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 6 00:08:31.847862 kernel: PCI host bridge to bus 0000:00
Sep 6 00:08:31.847933 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 6 00:08:31.847997 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 6 00:08:31.848055 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 6 00:08:31.848141 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 00:08:31.848230 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 6 00:08:31.848309 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 6 00:08:31.848378 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 6 00:08:31.848448 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 6 00:08:31.848514 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 6 00:08:31.848581 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 6 00:08:31.848646 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 6 00:08:31.848713 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 6 00:08:31.848787 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 6 00:08:31.848846 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 6 00:08:31.848911 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 6 00:08:31.848921 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 6 00:08:31.848928 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 6 00:08:31.848936 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 6 00:08:31.848944 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 6 00:08:31.848951 kernel: iommu: Default domain type: Translated
Sep 6 00:08:31.848958 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 6 00:08:31.848966 kernel: efivars: Registered efivars operations
Sep 6 00:08:31.848973 kernel: vgaarb: loaded
Sep 6 00:08:31.848982 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 6 00:08:31.848990 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:08:31.848997 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:08:31.849004 kernel: pnp: PnP ACPI init
Sep 6 00:08:31.849084 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 6 00:08:31.849095 kernel: pnp: PnP ACPI: found 1 devices
Sep 6 00:08:31.849103 kernel: NET: Registered PF_INET protocol family
Sep 6 00:08:31.849110 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 00:08:31.849127 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 6 00:08:31.849136 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:08:31.849143 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 00:08:31.849150 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 6 00:08:31.849158 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 6 00:08:31.849165 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:08:31.849173 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:08:31.849180 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:08:31.849187 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:08:31.849196 kernel: kvm [1]: HYP mode not available
Sep 6 00:08:31.849203 kernel: Initialise system trusted keyrings
Sep 6 00:08:31.849211 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 6 00:08:31.849218 kernel: Key type asymmetric registered
Sep 6 00:08:31.849225 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:08:31.849232 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 6 00:08:31.849240 kernel: io scheduler mq-deadline registered
Sep 6 00:08:31.849247 kernel: io scheduler kyber registered
Sep 6 00:08:31.849254 kernel: io scheduler bfq registered
Sep 6 00:08:31.849263 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 6 00:08:31.849270 kernel: ACPI: button: Power Button [PWRB]
Sep 6 00:08:31.849278 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 6 00:08:31.849353 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 6 00:08:31.849363 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:08:31.849371 kernel: thunder_xcv, ver 1.0
Sep 6 00:08:31.849378 kernel: thunder_bgx, ver 1.0
Sep 6 00:08:31.849386 kernel: nicpf, ver 1.0
Sep 6 00:08:31.849393 kernel: nicvf, ver 1.0
Sep 6 00:08:31.849470 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 6 00:08:31.849535 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T00:08:31 UTC (1757117311)
Sep 6 00:08:31.849545 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 6 00:08:31.849553 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 6 00:08:31.849560 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 6 00:08:31.849567 kernel: watchdog: Hard watchdog permanently disabled
Sep 6 00:08:31.849575 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:08:31.849582 kernel: Segment Routing with IPv6
Sep 6 00:08:31.849591 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:08:31.849599 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:08:31.849606 kernel: Key type dns_resolver registered
Sep 6 00:08:31.849613 kernel: registered taskstats version 1
Sep 6 00:08:31.849621 kernel: Loading compiled-in X.509 certificates
Sep 6 00:08:31.849628 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20'
Sep 6 00:08:31.849635 kernel: Key type .fscrypt registered
Sep 6 00:08:31.849642 kernel: Key type fscrypt-provisioning registered
Sep 6 00:08:31.849650 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:08:31.849658 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:08:31.849666 kernel: ima: No architecture policies found
Sep 6 00:08:31.849673 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 6 00:08:31.849680 kernel: clk: Disabling unused clocks
Sep 6 00:08:31.849688 kernel: Freeing unused kernel memory: 39424K
Sep 6 00:08:31.849695 kernel: Run /init as init process
Sep 6 00:08:31.849702 kernel: with arguments:
Sep 6 00:08:31.849709 kernel: /init
Sep 6 00:08:31.849717 kernel: with environment:
Sep 6 00:08:31.849725 kernel: HOME=/
Sep 6 00:08:31.849749 kernel: TERM=linux
Sep 6 00:08:31.849757 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:08:31.849766 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 6 00:08:31.849775 systemd[1]: Detected virtualization kvm.
Sep 6 00:08:31.849783 systemd[1]: Detected architecture arm64.
Sep 6 00:08:31.849791 systemd[1]: Running in initrd.
Sep 6 00:08:31.849798 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:08:31.849808 systemd[1]: Hostname set to <localhost>.
Sep 6 00:08:31.849816 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:08:31.849824 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:08:31.849832 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 6 00:08:31.849840 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 6 00:08:31.849849 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 6 00:08:31.849857 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 6 00:08:31.849866 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 6 00:08:31.849875 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 6 00:08:31.849884 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 6 00:08:31.849892 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 6 00:08:31.849900 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 6 00:08:31.849908 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 6 00:08:31.849916 systemd[1]: Reached target paths.target - Path Units.
Sep 6 00:08:31.849925 systemd[1]: Reached target slices.target - Slice Units.
Sep 6 00:08:31.849933 systemd[1]: Reached target swap.target - Swaps.
Sep 6 00:08:31.849941 systemd[1]: Reached target timers.target - Timer Units.
Sep 6 00:08:31.849949 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 6 00:08:31.849957 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 6 00:08:31.849964 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 6 00:08:31.849972 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 6 00:08:31.849980 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 6 00:08:31.849988 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 6 00:08:31.849997 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 6 00:08:31.850005 systemd[1]: Reached target sockets.target - Socket Units.
Sep 6 00:08:31.850013 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 6 00:08:31.850021 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 6 00:08:31.850029 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 6 00:08:31.850037 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:08:31.850045 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 6 00:08:31.850052 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 6 00:08:31.850062 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 00:08:31.850070 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 6 00:08:31.850077 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 6 00:08:31.850085 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:08:31.850117 systemd-journald[237]: Collecting audit messages is disabled.
Sep 6 00:08:31.850141 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 6 00:08:31.850149 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 00:08:31.850158 systemd-journald[237]: Journal started
Sep 6 00:08:31.850178 systemd-journald[237]: Runtime Journal (/run/log/journal/22e49f463c1d48dd882d30195e2e4fd5) is 5.9M, max 47.3M, 41.4M free.
Sep 6 00:08:31.843704 systemd-modules-load[238]: Inserted module 'overlay'
Sep 6 00:08:31.851824 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 6 00:08:31.854773 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 6 00:08:31.857510 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:08:31.857531 kernel: Bridge firewalling registered
Sep 6 00:08:31.857945 systemd-modules-load[238]: Inserted module 'br_netfilter'
Sep 6 00:08:31.861902 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 6 00:08:31.863527 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 6 00:08:31.867560 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 6 00:08:31.870762 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 6 00:08:31.873087 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 6 00:08:31.879209 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 6 00:08:31.880773 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 6 00:08:31.889491 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 00:08:31.890872 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 6 00:08:31.906975 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 6 00:08:31.909140 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 6 00:08:31.919103 dracut-cmdline[274]: dracut-dracut-053
Sep 6 00:08:31.921743 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 6 00:08:31.935049 systemd-resolved[275]: Positive Trust Anchors:
Sep 6 00:08:31.935068 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:08:31.935103 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 6 00:08:31.940037 systemd-resolved[275]: Defaulting to hostname 'linux'.
Sep 6 00:08:31.941908 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 6 00:08:31.942824 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 6 00:08:31.986763 kernel: SCSI subsystem initialized
Sep 6 00:08:31.991754 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 00:08:31.998758 kernel: iscsi: registered transport (tcp)
Sep 6 00:08:32.011825 kernel: iscsi: registered transport (qla4xxx)
Sep 6 00:08:32.011890 kernel: QLogic iSCSI HBA Driver
Sep 6 00:08:32.056275 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 6 00:08:32.069916 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 6 00:08:32.086163 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:08:32.086227 kernel: device-mapper: uevent: version 1.0.3
Sep 6 00:08:32.086242 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 6 00:08:32.132761 kernel: raid6: neonx8 gen() 15701 MB/s
Sep 6 00:08:32.149749 kernel: raid6: neonx4 gen() 15575 MB/s
Sep 6 00:08:32.166747 kernel: raid6: neonx2 gen() 13262 MB/s
Sep 6 00:08:32.183744 kernel: raid6: neonx1 gen() 10502 MB/s
Sep 6 00:08:32.200756 kernel: raid6: int64x8 gen() 6965 MB/s
Sep 6 00:08:32.217753 kernel: raid6: int64x4 gen() 7352 MB/s
Sep 6 00:08:32.234752 kernel: raid6: int64x2 gen() 6131 MB/s
Sep 6 00:08:32.251751 kernel: raid6: int64x1 gen() 5058 MB/s
Sep 6 00:08:32.251774 kernel: raid6: using algorithm neonx8 gen() 15701 MB/s
Sep 6 00:08:32.268762 kernel: raid6: .... xor() 12058 MB/s, rmw enabled
Sep 6 00:08:32.268783 kernel: raid6: using neon recovery algorithm
Sep 6 00:08:32.273779 kernel: xor: measuring software checksum speed
Sep 6 00:08:32.273825 kernel: 8regs : 19788 MB/sec
Sep 6 00:08:32.274816 kernel: 32regs : 19664 MB/sec
Sep 6 00:08:32.274831 kernel: arm64_neon : 26848 MB/sec
Sep 6 00:08:32.274841 kernel: xor: using function: arm64_neon (26848 MB/sec)
Sep 6 00:08:32.323758 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 6 00:08:32.334847 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 6 00:08:32.343962 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 6 00:08:32.355567 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Sep 6 00:08:32.358759 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 6 00:08:32.371143 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 6 00:08:32.383029 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Sep 6 00:08:32.411409 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 6 00:08:32.420955 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 6 00:08:32.464355 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 6 00:08:32.474618 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 6 00:08:32.487511 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 6 00:08:32.489295 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 6 00:08:32.490762 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 6 00:08:32.492696 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 6 00:08:32.505204 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 6 00:08:32.515391 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 6 00:08:32.515570 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 6 00:08:32.519212 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 6 00:08:32.522947 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 6 00:08:32.522996 kernel: GPT:9289727 != 19775487
Sep 6 00:08:32.523006 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 6 00:08:32.523015 kernel: GPT:9289727 != 19775487
Sep 6 00:08:32.523813 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 6 00:08:32.534502 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:08:32.536319 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 00:08:32.536445 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 00:08:32.539933 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 6 00:08:32.540807 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 00:08:32.540953 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 00:08:32.552237 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (520)
Sep 6 00:08:32.552266 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (510)
Sep 6 00:08:32.543292 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 00:08:32.552311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 00:08:32.561617 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 6 00:08:32.566022 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 6 00:08:32.567194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 00:08:32.578267 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 6 00:08:32.579259 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 6 00:08:32.587079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 6 00:08:32.600965 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 6 00:08:32.602563 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 6 00:08:32.606508 disk-uuid[549]: Primary Header is updated.
Sep 6 00:08:32.606508 disk-uuid[549]: Secondary Entries is updated.
Sep 6 00:08:32.606508 disk-uuid[549]: Secondary Header is updated.
Sep 6 00:08:32.609769 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:08:32.612753 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:08:32.616757 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:08:32.626742 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 00:08:33.617769 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:08:33.618468 disk-uuid[550]: The operation has completed successfully.
Sep 6 00:08:33.637932 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 6 00:08:33.638033 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 6 00:08:33.660963 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 6 00:08:33.665128 sh[574]: Success
Sep 6 00:08:33.676783 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 6 00:08:33.728337 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 6 00:08:33.730019 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 6 00:08:33.732779 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 6 00:08:33.742548 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e
Sep 6 00:08:33.742591 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:08:33.742602 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 6 00:08:33.743261 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 6 00:08:33.743276 kernel: BTRFS info (device dm-0): using free space tree
Sep 6 00:08:33.747199 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 6 00:08:33.748373 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 6 00:08:33.755958 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 6 00:08:33.757382 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 6 00:08:33.765328 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:08:33.765382 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:08:33.765393 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:08:33.766762 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 6 00:08:33.775316 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 6 00:08:33.776575 kernel: BTRFS info (device vda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:08:33.783198 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 6 00:08:33.788935 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 6 00:08:33.856166 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 6 00:08:33.856355 ignition[662]: Ignition 2.19.0
Sep 6 00:08:33.856362 ignition[662]: Stage: fetch-offline
Sep 6 00:08:33.856395 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:08:33.856403 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:08:33.856552 ignition[662]: parsed url from cmdline: ""
Sep 6 00:08:33.856555 ignition[662]: no config URL provided
Sep 6 00:08:33.856560 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 00:08:33.856567 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Sep 6 00:08:33.856590 ignition[662]: op(1): [started] loading QEMU firmware config module
Sep 6 00:08:33.856595 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 6 00:08:33.863802 ignition[662]: op(1): [finished] loading QEMU firmware config module
Sep 6 00:08:33.867952 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 6 00:08:33.888923 systemd-networkd[763]: lo: Link UP
Sep 6 00:08:33.888933 systemd-networkd[763]: lo: Gained carrier
Sep 6 00:08:33.890007 systemd-networkd[763]: Enumeration completed
Sep 6 00:08:33.890086 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 6 00:08:33.890511 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 6 00:08:33.890515 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:08:33.891247 systemd-networkd[763]: eth0: Link UP
Sep 6 00:08:33.891250 systemd-networkd[763]: eth0: Gained carrier
Sep 6 00:08:33.891258 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 6 00:08:33.891685 systemd[1]: Reached target network.target - Network.
Sep 6 00:08:33.915620 ignition[662]: parsing config with SHA512: 80afab1e5a162da52dda82186dad97625cce47adad6e9d18cf867a1282b125726bf5fa180a066e912dc61f93d14e1bbc30981e1747e84d0d5c273f79e3f383fc
Sep 6 00:08:33.921479 unknown[662]: fetched base config from "system"
Sep 6 00:08:33.921491 unknown[662]: fetched user config from "qemu"
Sep 6 00:08:33.921915 ignition[662]: fetch-offline: fetch-offline passed
Sep 6 00:08:33.923842 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 6 00:08:33.921982 ignition[662]: Ignition finished successfully
Sep 6 00:08:33.925149 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 6 00:08:33.927805 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 6 00:08:33.934964 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 6 00:08:33.945622 ignition[767]: Ignition 2.19.0
Sep 6 00:08:33.945632 ignition[767]: Stage: kargs
Sep 6 00:08:33.945816 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:08:33.945825 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:08:33.946714 ignition[767]: kargs: kargs passed
Sep 6 00:08:33.950575 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 6 00:08:33.946842 ignition[767]: Ignition finished successfully
Sep 6 00:08:33.965918 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 6 00:08:33.975862 ignition[776]: Ignition 2.19.0
Sep 6 00:08:33.975872 ignition[776]: Stage: disks
Sep 6 00:08:33.976035 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:08:33.976045 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:08:33.976939 ignition[776]: disks: disks passed
Sep 6 00:08:33.976984 ignition[776]: Ignition finished successfully
Sep 6 00:08:33.981518 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 6 00:08:33.982597 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 6 00:08:33.983919 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 6 00:08:33.988083 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 6 00:08:33.989596 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 6 00:08:33.992616 systemd[1]: Reached target basic.target - Basic System.
Sep 6 00:08:34.010953 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 6 00:08:34.021197 systemd-resolved[275]: Detected conflict on linux IN A 10.0.0.115
Sep 6 00:08:34.021210 systemd-resolved[275]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Sep 6 00:08:34.027935 systemd-fsck[786]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 6 00:08:34.029022 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 6 00:08:34.034404 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 6 00:08:34.078776 kernel: EXT4-fs (vda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none.
Sep 6 00:08:34.079207 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 6 00:08:34.080331 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 6 00:08:34.091841 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 6 00:08:34.093827 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 6 00:08:34.095017 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 6 00:08:34.095061 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 6 00:08:34.095083 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 6 00:08:34.102116 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (794)
Sep 6 00:08:34.102147 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:08:34.102158 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:08:34.100586 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 6 00:08:34.104715 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:08:34.105751 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 6 00:08:34.110896 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 6 00:08:34.112643 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 6 00:08:34.146239 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Sep 6 00:08:34.150175 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
Sep 6 00:08:34.153992 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
Sep 6 00:08:34.156861 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 6 00:08:34.223028 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 6 00:08:34.233843 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 6 00:08:34.235319 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 6 00:08:34.240747 kernel: BTRFS info (device vda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:08:34.256002 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 6 00:08:34.257702 ignition[908]: INFO : Ignition 2.19.0
Sep 6 00:08:34.257702 ignition[908]: INFO : Stage: mount
Sep 6 00:08:34.258999 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:08:34.258999 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:08:34.258999 ignition[908]: INFO : mount: mount passed
Sep 6 00:08:34.258999 ignition[908]: INFO : Ignition finished successfully
Sep 6 00:08:34.261174 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 6 00:08:34.273902 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 6 00:08:34.740801 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 6 00:08:34.753928 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 6 00:08:34.758761 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (923)
Sep 6 00:08:34.760991 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:08:34.761019 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:08:34.761030 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:08:34.762743 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 6 00:08:34.764166 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 6 00:08:34.780654 ignition[940]: INFO : Ignition 2.19.0
Sep 6 00:08:34.780654 ignition[940]: INFO : Stage: files
Sep 6 00:08:34.782295 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:08:34.782295 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:08:34.782295 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 00:08:34.785335 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 00:08:34.785335 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 00:08:34.787842 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 00:08:34.789243 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 00:08:34.789243 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 00:08:34.788343 unknown[940]: wrote ssh authorized keys file for user: core
Sep 6 00:08:34.792303 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 6 00:08:34.792303 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 6 00:08:34.857598 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 6 00:08:35.127653 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 6 00:08:35.127653 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:08:35.130809 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:08:35.130809 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:08:35.130809 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:08:35.130809 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:08:35.130809 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:08:35.130809 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:08:35.130809 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:08:35.130809 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:08:35.130809 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:08:35.130809 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 00:08:35.130809 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 00:08:35.130809 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 00:08:35.130809 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 6 00:08:35.179943 systemd-networkd[763]: eth0: Gained IPv6LL
Sep 6 00:08:35.712092 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 6 00:08:36.260478 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 00:08:36.262364 ignition[940]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 6 00:08:36.262364 ignition[940]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:08:36.262364 ignition[940]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:08:36.262364 ignition[940]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 6 00:08:36.262364 ignition[940]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 6 00:08:36.262364 ignition[940]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 6 00:08:36.262364 ignition[940]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 6 00:08:36.262364 ignition[940]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 6 00:08:36.262364 ignition[940]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 6 00:08:36.279651 ignition[940]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 6 00:08:36.284601 ignition[940]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 6 00:08:36.285804 ignition[940]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 6 00:08:36.285804 ignition[940]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 00:08:36.285804 ignition[940]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 00:08:36.285804 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:08:36.285804 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:08:36.285804 ignition[940]: INFO : files: files passed
Sep 6 00:08:36.285804 ignition[940]: INFO : Ignition finished successfully
Sep 6 00:08:36.286845 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 6 00:08:36.300891 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 6 00:08:36.303153 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 6 00:08:36.304553 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 00:08:36.304683 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 6 00:08:36.311829 initrd-setup-root-after-ignition[968]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 6 00:08:36.315202 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:08:36.315202 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:08:36.318085 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:08:36.319080 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 6 00:08:36.320540 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 6 00:08:36.328901 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 6 00:08:36.349008 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 00:08:36.349158 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 6 00:08:36.351029 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 6 00:08:36.352444 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 6 00:08:36.353924 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 6 00:08:36.354750 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 6 00:08:36.370336 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 6 00:08:36.384993 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 6 00:08:36.393222 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 6 00:08:36.395119 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 6 00:08:36.396162 systemd[1]: Stopped target timers.target - Timer Units.
Sep 6 00:08:36.397587 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 00:08:36.397715 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 6 00:08:36.399810 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 6 00:08:36.401463 systemd[1]: Stopped target basic.target - Basic System.
Sep 6 00:08:36.402804 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 6 00:08:36.404319 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 6 00:08:36.405874 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 6 00:08:36.407409 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 6 00:08:36.408786 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 6 00:08:36.410340 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 6 00:08:36.411798 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 6 00:08:36.413211 systemd[1]: Stopped target swap.target - Swaps.
Sep 6 00:08:36.414385 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 00:08:36.414513 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 6 00:08:36.416355 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 6 00:08:36.417918 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 6 00:08:36.419422 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 6 00:08:36.422792 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 6 00:08:36.424759 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 00:08:36.424889 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 6 00:08:36.426970 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 00:08:36.427087 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 6 00:08:36.428786 systemd[1]: Stopped target paths.target - Path Units.
Sep 6 00:08:36.430118 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 00:08:36.430790 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 6 00:08:36.432615 systemd[1]: Stopped target slices.target - Slice Units.
Sep 6 00:08:36.434406 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 6 00:08:36.435580 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 00:08:36.435670 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 6 00:08:36.436827 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 00:08:36.436902 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 6 00:08:36.438103 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 00:08:36.438217 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 6 00:08:36.439572 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 00:08:36.439670 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 6 00:08:36.451924 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 6 00:08:36.453338 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 6 00:08:36.454013 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 00:08:36.454141 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 6 00:08:36.455724 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 00:08:36.455833 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 6 00:08:36.461086 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 00:08:36.461960 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 6 00:08:36.464125 ignition[994]: INFO : Ignition 2.19.0
Sep 6 00:08:36.464125 ignition[994]: INFO : Stage: umount
Sep 6 00:08:36.465433 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:08:36.465433 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:08:36.465433 ignition[994]: INFO : umount: umount passed
Sep 6 00:08:36.465433 ignition[994]: INFO : Ignition finished successfully
Sep 6 00:08:36.466622 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 00:08:36.467204 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 00:08:36.467299 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 6 00:08:36.469157 systemd[1]: Stopped target network.target - Network.
Sep 6 00:08:36.469935 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 00:08:36.470007 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 6 00:08:36.471697 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 00:08:36.471764 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 6 00:08:36.473082 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 00:08:36.473136 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 6 00:08:36.475875 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 6 00:08:36.475924 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 6 00:08:36.477374 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 6 00:08:36.478904 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 6 00:08:36.486167 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 00:08:36.486281 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 6 00:08:36.488670 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 6 00:08:36.488752 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 6 00:08:36.493877 systemd-networkd[763]: eth0: DHCPv6 lease lost
Sep 6 00:08:36.495477 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 00:08:36.496342 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 6 00:08:36.497504 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 00:08:36.497536 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 6 00:08:36.511870 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 6 00:08:36.512609 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 00:08:36.512672 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 6 00:08:36.514419 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:08:36.514460 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 6 00:08:36.515864 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 6 00:08:36.515901 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 6 00:08:36.517947 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 6 00:08:36.528684 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 6 00:08:36.528808 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 6 00:08:36.530577 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 6 00:08:36.530689 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 6 00:08:36.532353 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 6 00:08:36.532430 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 6 00:08:36.534612 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 6 00:08:36.534663 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 6 00:08:36.536307 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 6 00:08:36.536338 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 6 00:08:36.537556 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 6 00:08:36.537602 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 6 00:08:36.539686 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 6 00:08:36.539728 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 6 00:08:36.541805 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 00:08:36.541851 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 00:08:36.544115 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 6 00:08:36.544165 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 6 00:08:36.558942 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 6 00:08:36.559771 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 6 00:08:36.559831 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 6 00:08:36.561737 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 6 00:08:36.561786 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 6 00:08:36.563399 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 6 00:08:36.563437 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 6 00:08:36.565249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 00:08:36.565293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 00:08:36.567147 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 6 00:08:36.567231 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 6 00:08:36.569081 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 6 00:08:36.572896 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 6 00:08:36.581619 systemd[1]: Switching root.
Sep 6 00:08:36.605484 systemd-journald[237]: Journal stopped
Sep 6 00:08:37.267200 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Sep 6 00:08:37.267259 kernel: SELinux: policy capability network_peer_controls=1
Sep 6 00:08:37.267275 kernel: SELinux: policy capability open_perms=1
Sep 6 00:08:37.267285 kernel: SELinux: policy capability extended_socket_class=1
Sep 6 00:08:37.267297 kernel: SELinux: policy capability always_check_network=0
Sep 6 00:08:37.267306 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 6 00:08:37.267315 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 6 00:08:37.267324 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 6 00:08:37.267337 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 6 00:08:37.267347 kernel: audit: type=1403 audit(1757117316.744:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 6 00:08:37.267359 systemd[1]: Successfully loaded SELinux policy in 30.610ms.
Sep 6 00:08:37.267374 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.799ms.
Sep 6 00:08:37.267386 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 6 00:08:37.267396 systemd[1]: Detected virtualization kvm.
Sep 6 00:08:37.267407 systemd[1]: Detected architecture arm64.
Sep 6 00:08:37.267417 systemd[1]: Detected first boot.
Sep 6 00:08:37.267428 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:08:37.267439 zram_generator::config[1039]: No configuration found.
Sep 6 00:08:37.267453 systemd[1]: Populated /etc with preset unit settings.
Sep 6 00:08:37.267465 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 6 00:08:37.267475 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 6 00:08:37.267486 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 6 00:08:37.267497 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 6 00:08:37.267507 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 6 00:08:37.267517 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 6 00:08:37.267528 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 6 00:08:37.267539 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 6 00:08:37.267550 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 6 00:08:37.267561 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 6 00:08:37.267572 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 6 00:08:37.267583 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 6 00:08:37.267594 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 6 00:08:37.267605 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 6 00:08:37.267615 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 6 00:08:37.267626 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 6 00:08:37.267637 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 6 00:08:37.267649 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 6 00:08:37.267659 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 6 00:08:37.267670 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 6 00:08:37.267680 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 6 00:08:37.267691 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 6 00:08:37.267701 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 6 00:08:37.267712 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 6 00:08:37.267723 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 6 00:08:37.267859 systemd[1]: Reached target slices.target - Slice Units.
Sep 6 00:08:37.267872 systemd[1]: Reached target swap.target - Swaps.
Sep 6 00:08:37.267883 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 6 00:08:37.267893 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 6 00:08:37.267904 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 6 00:08:37.267914 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 6 00:08:37.267925 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 6 00:08:37.267935 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 6 00:08:37.267947 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 6 00:08:37.267960 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 6 00:08:37.267970 systemd[1]: Mounting media.mount - External Media Directory...
Sep 6 00:08:37.267981 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 6 00:08:37.267991 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 6 00:08:37.268002 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 6 00:08:37.268013 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 6 00:08:37.268024 systemd[1]: Reached target machines.target - Containers.
Sep 6 00:08:37.268034 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 6 00:08:37.268044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 6 00:08:37.268057 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 6 00:08:37.268068 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 6 00:08:37.268078 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 6 00:08:37.268095 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 6 00:08:37.268109 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 6 00:08:37.268120 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 6 00:08:37.268130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 6 00:08:37.268141 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 00:08:37.268154 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 6 00:08:37.268164 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 6 00:08:37.268175 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 6 00:08:37.268187 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 6 00:08:37.268197 kernel: fuse: init (API version 7.39)
Sep 6 00:08:37.268206 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 6 00:08:37.268217 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 6 00:08:37.268227 kernel: ACPI: bus type drm_connector registered
Sep 6 00:08:37.268237 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 6 00:08:37.268248 kernel: loop: module loaded
Sep 6 00:08:37.268258 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 6 00:08:37.268269 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 6 00:08:37.268299 systemd-journald[1106]: Collecting audit messages is disabled.
Sep 6 00:08:37.268322 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 6 00:08:37.268333 systemd[1]: Stopped verity-setup.service.
Sep 6 00:08:37.268344 systemd-journald[1106]: Journal started
Sep 6 00:08:37.268366 systemd-journald[1106]: Runtime Journal (/run/log/journal/22e49f463c1d48dd882d30195e2e4fd5) is 5.9M, max 47.3M, 41.4M free.
Sep 6 00:08:37.094715 systemd[1]: Queued start job for default target multi-user.target.
Sep 6 00:08:37.109842 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 6 00:08:37.110202 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 6 00:08:37.271212 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 6 00:08:37.271847 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 6 00:08:37.272893 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 6 00:08:37.273827 systemd[1]: Mounted media.mount - External Media Directory.
Sep 6 00:08:37.274645 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 6 00:08:37.275802 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 6 00:08:37.276740 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 6 00:08:37.278758 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 6 00:08:37.279832 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 6 00:08:37.281015 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 6 00:08:37.281159 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 6 00:08:37.282402 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:08:37.282523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 6 00:08:37.283671 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:08:37.283806 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 6 00:08:37.284826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:08:37.286772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 6 00:08:37.287904 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 6 00:08:37.288023 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 6 00:08:37.289043 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:08:37.289173 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 6 00:08:37.291753 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 6 00:08:37.292807 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 6 00:08:37.294196 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 6 00:08:37.305631 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 6 00:08:37.319870 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 6 00:08:37.321788 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 6 00:08:37.322607 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 6 00:08:37.322636 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 6 00:08:37.324453 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 6 00:08:37.326482 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 6 00:08:37.328431 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 6 00:08:37.329393 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 6 00:08:37.330834 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 6 00:08:37.334913 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 6 00:08:37.335915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:08:37.337367 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 6 00:08:37.338602 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 6 00:08:37.342974 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 6 00:08:37.348942 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 6 00:08:37.349182 systemd-journald[1106]: Time spent on flushing to /var/log/journal/22e49f463c1d48dd882d30195e2e4fd5 is 28.612ms for 858 entries.
Sep 6 00:08:37.349182 systemd-journald[1106]: System Journal (/var/log/journal/22e49f463c1d48dd882d30195e2e4fd5) is 8.0M, max 195.6M, 187.6M free.
Sep 6 00:08:37.383860 systemd-journald[1106]: Received client request to flush runtime journal.
Sep 6 00:08:37.383913 kernel: loop0: detected capacity change from 0 to 114432
Sep 6 00:08:37.383927 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 6 00:08:37.352046 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 6 00:08:37.356770 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 6 00:08:37.358100 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 6 00:08:37.359797 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 6 00:08:37.361820 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 6 00:08:37.364190 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 6 00:08:37.372537 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 6 00:08:37.382960 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 6 00:08:37.387043 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 6 00:08:37.388740 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 6 00:08:37.392872 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 6 00:08:37.401629 kernel: loop1: detected capacity change from 0 to 114328
Sep 6 00:08:37.399470 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 6 00:08:37.400333 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Sep 6 00:08:37.400364 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Sep 6 00:08:37.408780 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 6 00:08:37.410259 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 6 00:08:37.420145 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 6 00:08:37.421321 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 6 00:08:37.426752 kernel: loop2: detected capacity change from 0 to 211168
Sep 6 00:08:37.442776 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 6 00:08:37.448759 kernel: loop3: detected capacity change from 0 to 114432
Sep 6 00:08:37.451002 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 6 00:08:37.453756 kernel: loop4: detected capacity change from 0 to 114328
Sep 6 00:08:37.458754 kernel: loop5: detected capacity change from 0 to 211168
Sep 6 00:08:37.461793 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 6 00:08:37.462204 (sd-merge)[1175]: Merged extensions into '/usr'.
Sep 6 00:08:37.467025 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 6 00:08:37.467043 systemd[1]: Reloading...
Sep 6 00:08:37.468967 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Sep 6 00:08:37.468986 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Sep 6 00:08:37.523906 zram_generator::config[1203]: No configuration found.
Sep 6 00:08:37.603828 ldconfig[1145]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 6 00:08:37.629273 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:08:37.668862 systemd[1]: Reloading finished in 201 ms.
Sep 6 00:08:37.702774 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 6 00:08:37.703904 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 6 00:08:37.705212 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 6 00:08:37.720879 systemd[1]: Starting ensure-sysext.service...
Sep 6 00:08:37.722391 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 6 00:08:37.728228 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)...
Sep 6 00:08:37.728241 systemd[1]: Reloading...
Sep 6 00:08:37.737944 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 6 00:08:37.738164 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 6 00:08:37.738716 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 6 00:08:37.738920 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Sep 6 00:08:37.738971 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Sep 6 00:08:37.741023 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Sep 6 00:08:37.741034 systemd-tmpfiles[1240]: Skipping /boot
Sep 6 00:08:37.747721 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Sep 6 00:08:37.747742 systemd-tmpfiles[1240]: Skipping /boot
Sep 6 00:08:37.772780 zram_generator::config[1267]: No configuration found.
Sep 6 00:08:37.855830 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:08:37.895990 systemd[1]: Reloading finished in 167 ms.
Sep 6 00:08:37.913850 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 6 00:08:37.922304 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 6 00:08:37.930185 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 6 00:08:37.932621 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 6 00:08:37.934890 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 6 00:08:37.938006 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 6 00:08:37.941066 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 6 00:08:37.946140 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 6 00:08:37.949791 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 6 00:08:37.953028 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 6 00:08:37.960058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 6 00:08:37.962191 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 6 00:08:37.963445 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 6 00:08:37.965775 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 6 00:08:37.969668 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:08:37.969842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 6 00:08:37.971162 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:08:37.971291 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 6 00:08:37.973130 augenrules[1327]: No rules
Sep 6 00:08:37.974286 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 6 00:08:37.996154 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 6 00:08:37.997570 systemd-udevd[1311]: Using default interface naming scheme 'v255'.
Sep 6 00:08:37.998330 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:08:37.998502 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 6 00:08:38.005650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 6 00:08:38.015036 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 6 00:08:38.021021 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 6 00:08:38.024024 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 6 00:08:38.025555 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 6 00:08:38.027166 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 6 00:08:38.029805 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 6 00:08:38.030569 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:08:38.031353 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 6 00:08:38.033218 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 6 00:08:38.034670 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:08:38.034855 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 6 00:08:38.036204 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:08:38.036330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 6 00:08:38.037986 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:08:38.038122 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 6 00:08:38.048446 systemd[1]: Finished ensure-sysext.service.
Sep 6 00:08:38.049683 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 6 00:08:38.057250 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 6 00:08:38.063801 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1347)
Sep 6 00:08:38.065970 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 6 00:08:38.072274 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 6 00:08:38.078957 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 6 00:08:38.081907 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 6 00:08:38.082847 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 6 00:08:38.086034 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 6 00:08:38.090705 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 6 00:08:38.091670 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:08:38.091941 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 6 00:08:38.093235 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:08:38.093378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 6 00:08:38.096383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:08:38.096526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 6 00:08:38.099442 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:08:38.100259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 6 00:08:38.104273 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:08:38.104450 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 6 00:08:38.107928 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 6 00:08:38.112815 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:08:38.112878 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 6 00:08:38.120341 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 6 00:08:38.133941 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 6 00:08:38.149800 systemd-networkd[1380]: lo: Link UP
Sep 6 00:08:38.149808 systemd-networkd[1380]: lo: Gained carrier
Sep 6 00:08:38.150527 systemd-networkd[1380]: Enumeration completed
Sep 6 00:08:38.150643 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 6 00:08:38.151193 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 6 00:08:38.151203 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:08:38.151927 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 6 00:08:38.151962 systemd-networkd[1380]: eth0: Link UP
Sep 6 00:08:38.151965 systemd-networkd[1380]: eth0: Gained carrier
Sep 6 00:08:38.151973 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 6 00:08:38.159490 systemd-resolved[1308]: Positive Trust Anchors:
Sep 6 00:08:38.159503 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:08:38.159535 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 6 00:08:38.160934 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 6 00:08:38.162008 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 6 00:08:38.163175 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 6 00:08:38.165586 systemd[1]: Reached target time-set.target - System Time Set.
Sep 6 00:08:38.171633 systemd-resolved[1308]: Defaulting to hostname 'linux'.
Sep 6 00:08:38.171803 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 6 00:08:38.172345 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Sep 6 00:08:38.173372 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 6 00:08:38.173983 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 6 00:08:38.174031 systemd-timesyncd[1382]: Initial clock synchronization to Sat 2025-09-06 00:08:38.248918 UTC.
Sep 6 00:08:38.174692 systemd[1]: Reached target network.target - Network.
Sep 6 00:08:38.175674 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 6 00:08:38.214023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 00:08:38.226102 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 6 00:08:38.238953 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 6 00:08:38.247103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 00:08:38.249572 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:08:38.290305 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 6 00:08:38.291538 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 6 00:08:38.292477 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 6 00:08:38.293431 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 6 00:08:38.294432 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 6 00:08:38.295579 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 6 00:08:38.296552 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 6 00:08:38.297597 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 6 00:08:38.298579 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 6 00:08:38.298616 systemd[1]: Reached target paths.target - Path Units.
Sep 6 00:08:38.299338 systemd[1]: Reached target timers.target - Timer Units.
Sep 6 00:08:38.300853 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 6 00:08:38.303188 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 6 00:08:38.317801 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 6 00:08:38.319928 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 6 00:08:38.321254 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 6 00:08:38.322221 systemd[1]: Reached target sockets.target - Socket Units.
Sep 6 00:08:38.322981 systemd[1]: Reached target basic.target - Basic System.
Sep 6 00:08:38.323680 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 6 00:08:38.323707 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 6 00:08:38.324807 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 6 00:08:38.326656 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 6 00:08:38.328838 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:08:38.330163 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 6 00:08:38.335143 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 6 00:08:38.335838 jq[1411]: false
Sep 6 00:08:38.335941 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 6 00:08:38.338401 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 6 00:08:38.340491 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 6 00:08:38.344912 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 6 00:08:38.348555 extend-filesystems[1412]: Found loop3
Sep 6 00:08:38.348555 extend-filesystems[1412]: Found loop4
Sep 6 00:08:38.348555 extend-filesystems[1412]: Found loop5
Sep 6 00:08:38.348555 extend-filesystems[1412]: Found vda
Sep 6 00:08:38.348555 extend-filesystems[1412]: Found vda1
Sep 6 00:08:38.348555 extend-filesystems[1412]: Found vda2
Sep 6 00:08:38.348555 extend-filesystems[1412]: Found vda3
Sep 6 00:08:38.348555 extend-filesystems[1412]: Found usr
Sep 6 00:08:38.348555 extend-filesystems[1412]: Found vda4
Sep 6 00:08:38.348555 extend-filesystems[1412]: Found vda6
Sep 6 00:08:38.348555 extend-filesystems[1412]: Found vda7
Sep 6 00:08:38.348555 extend-filesystems[1412]: Found vda9
Sep 6 00:08:38.348555 extend-filesystems[1412]: Checking size of /dev/vda9
Sep 6 00:08:38.385510 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1360)
Sep 6 00:08:38.366072 dbus-daemon[1410]: [system] SELinux support is enabled
Sep 6 00:08:38.389973 extend-filesystems[1412]: Resized partition /dev/vda9
Sep 6 00:08:38.349520 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 6 00:08:38.354691 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 6 00:08:38.357019 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 6 00:08:38.357482 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 6 00:08:38.358549 systemd[1]: Starting update-engine.service - Update Engine...
Sep 6 00:08:38.362809 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 6 00:08:38.392393 jq[1429]: true
Sep 6 00:08:38.365400 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 6 00:08:38.370048 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 6 00:08:38.377513 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 00:08:38.379889 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 6 00:08:38.380225 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 00:08:38.380361 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 6 00:08:38.389272 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 6 00:08:38.389436 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 6 00:08:38.398530 extend-filesystems[1432]: resize2fs 1.47.1 (20-May-2024)
Sep 6 00:08:38.402902 jq[1437]: true
Sep 6 00:08:38.406785 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 6 00:08:38.411057 (ntainerd)[1446]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 6 00:08:38.412307 tar[1435]: linux-arm64/LICENSE
Sep 6 00:08:38.412307 tar[1435]: linux-arm64/helm
Sep 6 00:08:38.419905 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 00:08:38.419939 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 6 00:08:38.421494 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 6 00:08:38.421517 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 6 00:08:38.426756 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 6 00:08:38.437123 update_engine[1427]: I20250906 00:08:38.436641 1427 main.cc:92] Flatcar Update Engine starting
Sep 6 00:08:38.438603 extend-filesystems[1432]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 6 00:08:38.438603 extend-filesystems[1432]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 6 00:08:38.438603 extend-filesystems[1432]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 6 00:08:38.443836 extend-filesystems[1412]: Resized filesystem in /dev/vda9
Sep 6 00:08:38.445371 update_engine[1427]: I20250906 00:08:38.441645 1427 update_check_scheduler.cc:74] Next update check in 7m24s
Sep 6 00:08:38.440588 systemd[1]: Started update-engine.service - Update Engine.
Sep 6 00:08:38.442281 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 6 00:08:38.443775 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 6 00:08:38.451687 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 6 00:08:38.453996 systemd-logind[1425]: New seat seat0.
Sep 6 00:08:38.457130 bash[1463]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:08:38.455914 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 6 00:08:38.457719 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 6 00:08:38.462792 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 6 00:08:38.465602 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 6 00:08:38.509946 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 6 00:08:38.565713 containerd[1446]: time="2025-09-06T00:08:38.565616640Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 6 00:08:38.594102 containerd[1446]: time="2025-09-06T00:08:38.593997360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:08:38.595603 containerd[1446]: time="2025-09-06T00:08:38.595532160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:08:38.595603 containerd[1446]: time="2025-09-06T00:08:38.595573840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 00:08:38.595603 containerd[1446]: time="2025-09-06T00:08:38.595595560Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 6 00:08:38.595789 containerd[1446]: time="2025-09-06T00:08:38.595772360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 6 00:08:38.595814 containerd[1446]: time="2025-09-06T00:08:38.595794040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 6 00:08:38.595868 containerd[1446]: time="2025-09-06T00:08:38.595849880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:08:38.595889 containerd[1446]: time="2025-09-06T00:08:38.595869360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:08:38.596047 containerd[1446]: time="2025-09-06T00:08:38.596027880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:08:38.596077 containerd[1446]: time="2025-09-06T00:08:38.596047400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 00:08:38.596077 containerd[1446]: time="2025-09-06T00:08:38.596061160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:08:38.596077 containerd[1446]: time="2025-09-06T00:08:38.596070600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 00:08:38.596270 containerd[1446]: time="2025-09-06T00:08:38.596151480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:08:38.596369 containerd[1446]: time="2025-09-06T00:08:38.596350440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:08:38.596475 containerd[1446]: time="2025-09-06T00:08:38.596456720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:08:38.596475 containerd[1446]: time="2025-09-06T00:08:38.596473360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 00:08:38.596580 containerd[1446]: time="2025-09-06T00:08:38.596547120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 00:08:38.596607 containerd[1446]: time="2025-09-06T00:08:38.596590360Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 00:08:38.599666 containerd[1446]: time="2025-09-06T00:08:38.599633800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 6 00:08:38.599867 containerd[1446]: time="2025-09-06T00:08:38.599697600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 00:08:38.599867 containerd[1446]: time="2025-09-06T00:08:38.599716720Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 6 00:08:38.599867 containerd[1446]: time="2025-09-06T00:08:38.599753240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 6 00:08:38.599867 containerd[1446]: time="2025-09-06T00:08:38.599776640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 6 00:08:38.599946 containerd[1446]: time="2025-09-06T00:08:38.599929480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600201760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600354480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600372040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600385840Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600401400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600414760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600427920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600442280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600456320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600470160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600483760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600495880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600515800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.600969 containerd[1446]: time="2025-09-06T00:08:38.600531280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600543560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600561600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600574080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600587640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600600680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600613640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600627040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600647520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600660160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600671840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600683560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600698880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600720000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600760000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.601259 containerd[1446]: time="2025-09-06T00:08:38.600773680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 6 00:08:38.602262 containerd[1446]: time="2025-09-06T00:08:38.602233920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 6 00:08:38.602908 containerd[1446]: time="2025-09-06T00:08:38.602613520Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 6 00:08:38.602908 containerd[1446]: time="2025-09-06T00:08:38.602637080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 6 00:08:38.602908 containerd[1446]: time="2025-09-06T00:08:38.602651000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 6 00:08:38.602908 containerd[1446]: time="2025-09-06T00:08:38.602661400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.602908 containerd[1446]: time="2025-09-06T00:08:38.602676360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 6 00:08:38.602908 containerd[1446]: time="2025-09-06T00:08:38.602688080Z" level=info msg="NRI interface is disabled by configuration."
Sep 6 00:08:38.602908 containerd[1446]: time="2025-09-06T00:08:38.602698240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 6 00:08:38.603592 containerd[1446]: time="2025-09-06T00:08:38.603527120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 6 00:08:38.603960 containerd[1446]: time="2025-09-06T00:08:38.603845680Z" level=info msg="Connect containerd service"
Sep 6 00:08:38.603960 containerd[1446]: time="2025-09-06T00:08:38.603898440Z" level=info msg="using legacy CRI server"
Sep 6 00:08:38.603960 containerd[1446]: time="2025-09-06T00:08:38.603906880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 6 00:08:38.605333 containerd[1446]: time="2025-09-06T00:08:38.605206360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 6 00:08:38.606120 containerd[1446]: time="2025-09-06T00:08:38.606067960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:08:38.606291 containerd[1446]: time="2025-09-06T00:08:38.606262520Z" level=info msg="Start subscribing containerd event"
Sep 6 00:08:38.606342 containerd[1446]: time="2025-09-06T00:08:38.606306640Z" level=info msg="Start recovering state"
Sep 6 00:08:38.606425 containerd[1446]: time="2025-09-06T00:08:38.606387320Z" level=info msg="Start event monitor"
Sep 6 00:08:38.606425 containerd[1446]: time="2025-09-06T00:08:38.606406280Z" level=info msg="Start snapshots syncer"
Sep 6 00:08:38.606602 containerd[1446]: time="2025-09-06T00:08:38.606415040Z" level=info msg="Start cni network conf syncer for default"
Sep 6 00:08:38.606630 containerd[1446]: time="2025-09-06T00:08:38.606602840Z" level=info msg="Start streaming server"
Sep 6 00:08:38.607016 containerd[1446]: time="2025-09-06T00:08:38.606991960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 6 00:08:38.607050 containerd[1446]: time="2025-09-06T00:08:38.607037080Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 6 00:08:38.607113 containerd[1446]: time="2025-09-06T00:08:38.607097160Z" level=info msg="containerd successfully booted in 0.044195s"
Sep 6 00:08:38.607854 systemd[1]: Started containerd.service - containerd container runtime.
Sep 6 00:08:38.798450 tar[1435]: linux-arm64/README.md
Sep 6 00:08:38.811335 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 6 00:08:39.268602 sshd_keygen[1436]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 6 00:08:39.288263 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 6 00:08:39.299128 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 6 00:08:39.304680 systemd[1]: issuegen.service: Deactivated successfully.
Sep 6 00:08:39.304914 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 6 00:08:39.307397 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 6 00:08:39.320189 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 6 00:08:39.329132 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 6 00:08:39.331094 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 6 00:08:39.332115 systemd[1]: Reached target getty.target - Login Prompts.
Sep 6 00:08:39.468059 systemd-networkd[1380]: eth0: Gained IPv6LL
Sep 6 00:08:39.470903 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 6 00:08:39.472408 systemd[1]: Reached target network-online.target - Network is Online.
Sep 6 00:08:39.486026 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 6 00:08:39.488438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 00:08:39.490475 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 6 00:08:39.505692 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 6 00:08:39.505920 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 6 00:08:39.507943 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 6 00:08:39.509138 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 6 00:08:40.040705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:08:40.042196 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 6 00:08:40.044036 systemd[1]: Startup finished in 531ms (kernel) + 5.065s (initrd) + 3.330s (userspace) = 8.927s.
Sep 6 00:08:40.045329 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 6 00:08:40.396093 kubelet[1523]: E0906 00:08:40.395972 1523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:08:40.398482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:08:40.398637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:08:44.689980 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 6 00:08:44.691282 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:45700.service - OpenSSH per-connection server daemon (10.0.0.1:45700). Sep 6 00:08:44.736437 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 45700 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:08:44.738059 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:08:44.745958 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 6 00:08:44.755047 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 6 00:08:44.757160 systemd-logind[1425]: New session 1 of user core. Sep 6 00:08:44.765175 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 6 00:08:44.767629 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 6 00:08:44.774627 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:08:44.853524 systemd[1540]: Queued start job for default target default.target. Sep 6 00:08:44.863854 systemd[1540]: Created slice app.slice - User Application Slice. Sep 6 00:08:44.863891 systemd[1540]: Reached target paths.target - Paths. Sep 6 00:08:44.863904 systemd[1540]: Reached target timers.target - Timers. Sep 6 00:08:44.865234 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 6 00:08:44.875723 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 6 00:08:44.875920 systemd[1540]: Reached target sockets.target - Sockets. Sep 6 00:08:44.875941 systemd[1540]: Reached target basic.target - Basic System. Sep 6 00:08:44.875978 systemd[1540]: Reached target default.target - Main User Target. Sep 6 00:08:44.876005 systemd[1540]: Startup finished in 95ms. Sep 6 00:08:44.876201 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 6 00:08:44.877483 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 6 00:08:44.951209 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:45704.service - OpenSSH per-connection server daemon (10.0.0.1:45704). Sep 6 00:08:44.980373 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 45704 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:08:44.981727 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:08:44.985829 systemd-logind[1425]: New session 2 of user core. Sep 6 00:08:44.993943 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 6 00:08:45.047543 sshd[1551]: pam_unix(sshd:session): session closed for user core Sep 6 00:08:45.057185 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:45704.service: Deactivated successfully. Sep 6 00:08:45.058821 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:08:45.060112 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:08:45.073109 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:45708.service - OpenSSH per-connection server daemon (10.0.0.1:45708). Sep 6 00:08:45.073934 systemd-logind[1425]: Removed session 2. Sep 6 00:08:45.102933 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 45708 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:08:45.104557 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:08:45.108097 systemd-logind[1425]: New session 3 of user core. Sep 6 00:08:45.121909 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 6 00:08:45.170153 sshd[1558]: pam_unix(sshd:session): session closed for user core Sep 6 00:08:45.184225 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:45708.service: Deactivated successfully. Sep 6 00:08:45.185682 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:08:45.186913 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:08:45.188717 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:45712.service - OpenSSH per-connection server daemon (10.0.0.1:45712). Sep 6 00:08:45.190124 systemd-logind[1425]: Removed session 3. Sep 6 00:08:45.222641 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 45712 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:08:45.224046 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:08:45.227583 systemd-logind[1425]: New session 4 of user core. Sep 6 00:08:45.243934 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 6 00:08:45.296690 sshd[1565]: pam_unix(sshd:session): session closed for user core Sep 6 00:08:45.312284 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:45712.service: Deactivated successfully. Sep 6 00:08:45.313755 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:08:45.315918 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:08:45.326052 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:45714.service - OpenSSH per-connection server daemon (10.0.0.1:45714). Sep 6 00:08:45.327005 systemd-logind[1425]: Removed session 4. Sep 6 00:08:45.354724 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 45714 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:08:45.356164 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:08:45.360124 systemd-logind[1425]: New session 5 of user core. Sep 6 00:08:45.376982 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 6 00:08:45.433593 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 6 00:08:45.433910 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 6 00:08:45.447637 sudo[1575]: pam_unix(sudo:session): session closed for user root Sep 6 00:08:45.449419 sshd[1572]: pam_unix(sshd:session): session closed for user core Sep 6 00:08:45.461244 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:45714.service: Deactivated successfully. Sep 6 00:08:45.462826 systemd[1]: session-5.scope: Deactivated successfully. 
Sep 6 00:08:45.465828 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:08:45.475034 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:45720.service - OpenSSH per-connection server daemon (10.0.0.1:45720). Sep 6 00:08:45.476272 systemd-logind[1425]: Removed session 5. Sep 6 00:08:45.504256 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 45720 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:08:45.505580 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:08:45.509095 systemd-logind[1425]: New session 6 of user core. Sep 6 00:08:45.520933 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 6 00:08:45.571424 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 6 00:08:45.571695 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 6 00:08:45.574650 sudo[1584]: pam_unix(sudo:session): session closed for user root Sep 6 00:08:45.579136 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 6 00:08:45.579405 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 6 00:08:45.597063 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 6 00:08:45.598061 auditctl[1587]: No rules Sep 6 00:08:45.598848 systemd[1]: audit-rules.service: Deactivated successfully. Sep 6 00:08:45.599834 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 6 00:08:45.601547 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 6 00:08:45.623510 augenrules[1605]: No rules Sep 6 00:08:45.625789 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 6 00:08:45.627019 sudo[1583]: pam_unix(sudo:session): session closed for user root Sep 6 00:08:45.628473 sshd[1580]: pam_unix(sshd:session): session closed for user core Sep 6 00:08:45.634940 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:45720.service: Deactivated successfully. Sep 6 00:08:45.636302 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:08:45.637397 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:08:45.638541 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:45734.service - OpenSSH per-connection server daemon (10.0.0.1:45734). Sep 6 00:08:45.639168 systemd-logind[1425]: Removed session 6. Sep 6 00:08:45.670043 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 45734 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:08:45.671130 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:08:45.674382 systemd-logind[1425]: New session 7 of user core. Sep 6 00:08:45.687885 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 6 00:08:45.738008 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:08:45.738284 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 6 00:08:45.993997 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 6 00:08:45.994115 (dockerd)[1634]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 6 00:08:46.201972 dockerd[1634]: time="2025-09-06T00:08:46.201238004Z" level=info msg="Starting up" Sep 6 00:08:46.332009 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport688536432-merged.mount: Deactivated successfully. Sep 6 00:08:46.347724 dockerd[1634]: time="2025-09-06T00:08:46.347673128Z" level=info msg="Loading containers: start." Sep 6 00:08:46.423767 kernel: Initializing XFRM netlink socket Sep 6 00:08:46.481277 systemd-networkd[1380]: docker0: Link UP Sep 6 00:08:46.500755 dockerd[1634]: time="2025-09-06T00:08:46.500717222Z" level=info msg="Loading containers: done." Sep 6 00:08:46.513572 dockerd[1634]: time="2025-09-06T00:08:46.513521559Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:08:46.513716 dockerd[1634]: time="2025-09-06T00:08:46.513610888Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 6 00:08:46.513716 dockerd[1634]: time="2025-09-06T00:08:46.513704312Z" level=info msg="Daemon has completed initialization" Sep 6 00:08:46.541954 dockerd[1634]: time="2025-09-06T00:08:46.541785369Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:08:46.541949 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 6 00:08:47.039448 containerd[1446]: time="2025-09-06T00:08:47.039197262Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 6 00:08:47.329996 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3048006089-merged.mount: Deactivated successfully. Sep 6 00:08:47.660221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730166407.mount: Deactivated successfully. 
Sep 6 00:08:48.688819 containerd[1446]: time="2025-09-06T00:08:48.688771280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:48.689439 containerd[1446]: time="2025-09-06T00:08:48.689398249Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615" Sep 6 00:08:48.690389 containerd[1446]: time="2025-09-06T00:08:48.690339825Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:48.693312 containerd[1446]: time="2025-09-06T00:08:48.693279839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:48.694753 containerd[1446]: time="2025-09-06T00:08:48.694460449Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.655221056s" Sep 6 00:08:48.694753 containerd[1446]: time="2025-09-06T00:08:48.694492018Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 6 00:08:48.695854 containerd[1446]: time="2025-09-06T00:08:48.695675236Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 6 00:08:49.968749 containerd[1446]: time="2025-09-06T00:08:49.968679562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:49.969946 containerd[1446]: time="2025-09-06T00:08:49.969911243Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979" Sep 6 00:08:49.970939 containerd[1446]: time="2025-09-06T00:08:49.970893469Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:49.973812 containerd[1446]: time="2025-09-06T00:08:49.973763155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:49.974905 containerd[1446]: time="2025-09-06T00:08:49.974868805Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.279159316s" Sep 6 00:08:49.974905 containerd[1446]: time="2025-09-06T00:08:49.974902769Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 6 00:08:49.975478 containerd[1446]: 
time="2025-09-06T00:08:49.975311819Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 6 00:08:50.648969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:08:50.657908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:08:50.763412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:08:50.767212 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 6 00:08:50.812149 kubelet[1848]: E0906 00:08:50.812090 1848 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:08:50.815672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:08:50.815842 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:08:51.299785 containerd[1446]: time="2025-09-06T00:08:51.299432466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:51.300101 containerd[1446]: time="2025-09-06T00:08:51.299941829Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016" Sep 6 00:08:51.301205 containerd[1446]: time="2025-09-06T00:08:51.301158010Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:51.304422 containerd[1446]: time="2025-09-06T00:08:51.304053607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:51.305267 containerd[1446]: time="2025-09-06T00:08:51.305238609Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.329896401s" Sep 6 00:08:51.305305 containerd[1446]: time="2025-09-06T00:08:51.305273475Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 6 00:08:51.306180 containerd[1446]: time="2025-09-06T00:08:51.305770455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 6 00:08:52.259360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004177939.mount: Deactivated successfully. 
Sep 6 00:08:52.535391 containerd[1446]: time="2025-09-06T00:08:52.535275848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:52.536484 containerd[1446]: time="2025-09-06T00:08:52.536276104Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961" Sep 6 00:08:52.537370 containerd[1446]: time="2025-09-06T00:08:52.537211212Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:52.540126 containerd[1446]: time="2025-09-06T00:08:52.540091861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:52.540702 containerd[1446]: time="2025-09-06T00:08:52.540672102Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.234872314s" Sep 6 00:08:52.540769 containerd[1446]: time="2025-09-06T00:08:52.540704475Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 6 00:08:52.541287 containerd[1446]: time="2025-09-06T00:08:52.541160230Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 6 00:08:53.113492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount138486232.mount: Deactivated successfully. 
Sep 6 00:08:54.047531 containerd[1446]: time="2025-09-06T00:08:54.047468306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:54.048694 containerd[1446]: time="2025-09-06T00:08:54.048661979Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 6 00:08:54.049900 containerd[1446]: time="2025-09-06T00:08:54.049443210Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:54.053109 containerd[1446]: time="2025-09-06T00:08:54.053078940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:54.055069 containerd[1446]: time="2025-09-06T00:08:54.055036582Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.513842621s" Sep 6 00:08:54.055121 containerd[1446]: time="2025-09-06T00:08:54.055076233Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 6 00:08:54.055638 containerd[1446]: time="2025-09-06T00:08:54.055618600Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:08:54.505424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776278218.mount: Deactivated successfully. 
Sep 6 00:08:54.510318 containerd[1446]: time="2025-09-06T00:08:54.509676074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:54.511020 containerd[1446]: time="2025-09-06T00:08:54.510991262Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 6 00:08:54.511887 containerd[1446]: time="2025-09-06T00:08:54.511862887Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:54.514296 containerd[1446]: time="2025-09-06T00:08:54.514261809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:54.515507 containerd[1446]: time="2025-09-06T00:08:54.515478111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 459.830274ms" Sep 6 00:08:54.515630 containerd[1446]: time="2025-09-06T00:08:54.515611961Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 6 00:08:54.516113 containerd[1446]: time="2025-09-06T00:08:54.516078072Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 6 00:08:55.011122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2459047799.mount: Deactivated successfully. Sep 6 00:08:56.607989 containerd[1446]: time="2025-09-06T00:08:56.607935109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:56.609212 containerd[1446]: time="2025-09-06T00:08:56.609167466Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297" Sep 6 00:08:56.610093 containerd[1446]: time="2025-09-06T00:08:56.610043477Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:56.613019 containerd[1446]: time="2025-09-06T00:08:56.612992540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:08:56.614674 containerd[1446]: time="2025-09-06T00:08:56.614373361Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.098270941s" Sep 6 00:08:56.614674 containerd[1446]: time="2025-09-06T00:08:56.614405112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 6 00:09:00.145448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 6 00:09:00.155987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:09:00.180446 systemd[1]: Reloading requested from client PID 2011 ('systemctl') (unit session-7.scope)... Sep 6 00:09:00.180468 systemd[1]: Reloading... Sep 6 00:09:00.256773 zram_generator::config[2048]: No configuration found. Sep 6 00:09:00.349655 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:09:00.409956 systemd[1]: Reloading finished in 229 ms. Sep 6 00:09:00.457159 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:09:00.460124 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:09:00.460319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:09:00.462008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:09:00.570480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:09:00.574778 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 6 00:09:00.605963 kubelet[2097]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:09:00.605963 kubelet[2097]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 00:09:00.605963 kubelet[2097]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:09:00.606278 kubelet[2097]: I0906 00:09:00.606025 2097 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:09:01.462761 kubelet[2097]: I0906 00:09:01.462236 2097 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 6 00:09:01.462761 kubelet[2097]: I0906 00:09:01.462267 2097 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:09:01.462761 kubelet[2097]: I0906 00:09:01.462474 2097 server.go:956] "Client rotation is on, will bootstrap in background" Sep 6 00:09:01.479771 kubelet[2097]: E0906 00:09:01.479328 2097 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 6 00:09:01.480038 kubelet[2097]: I0906 00:09:01.480002 2097 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:09:01.487026 kubelet[2097]: E0906 00:09:01.487001 2097 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:09:01.487122 kubelet[2097]: I0906 00:09:01.487108 2097 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:09:01.489607 kubelet[2097]: I0906 00:09:01.489584 2097 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:09:01.489914 kubelet[2097]: I0906 00:09:01.489893 2097 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:09:01.490044 kubelet[2097]: I0906 00:09:01.489914 2097 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:09:01.490119 kubelet[2097]: I0906 00:09:01.490108 2097 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:09:01.490119 kubelet[2097]: I0906 00:09:01.490116 2097 container_manager_linux.go:303] "Creating device plugin manager" Sep 6 00:09:01.490304 kubelet[2097]: I0906 00:09:01.490282 2097 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:09:01.492698 kubelet[2097]: I0906 00:09:01.492681 2097 kubelet.go:480] "Attempting to sync node with API server" Sep 6 00:09:01.492750 kubelet[2097]: I0906 00:09:01.492704 2097 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:09:01.492750 kubelet[2097]: I0906 00:09:01.492741 2097 kubelet.go:386] "Adding apiserver pod source" Sep 6 00:09:01.494160 kubelet[2097]: I0906 00:09:01.493786 2097 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:09:01.495133 kubelet[2097]: I0906 00:09:01.495115 2097 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 6 00:09:01.496109 kubelet[2097]: I0906 00:09:01.496084 2097 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 6 00:09:01.497367 kubelet[2097]: E0906 00:09:01.497186 2097 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 6 00:09:01.497486 
kubelet[2097]: W0906 00:09:01.497472 2097 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:09:01.498415 kubelet[2097]: E0906 00:09:01.498291 2097 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 6 00:09:01.499977 kubelet[2097]: I0906 00:09:01.499961 2097 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 00:09:01.500104 kubelet[2097]: I0906 00:09:01.500092 2097 server.go:1289] "Started kubelet" Sep 6 00:09:01.500215 kubelet[2097]: I0906 00:09:01.500195 2097 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:09:01.501121 kubelet[2097]: I0906 00:09:01.501099 2097 server.go:317] "Adding debug handlers to kubelet server" Sep 6 00:09:01.501297 kubelet[2097]: I0906 00:09:01.501265 2097 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:09:01.505766 kubelet[2097]: I0906 00:09:01.505746 2097 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 00:09:01.506151 kubelet[2097]: I0906 00:09:01.506127 2097 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:09:01.506663 kubelet[2097]: E0906 00:09:01.505262 2097 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186288e998c9fe27 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 00:09:01.500046887 +0000 UTC m=+0.921613995,LastTimestamp:2025-09-06 00:09:01.500046887 +0000 UTC m=+0.921613995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 6 00:09:01.507757 kubelet[2097]: E0906 00:09:01.507003 2097 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:09:01.507757 kubelet[2097]: E0906 00:09:01.507271 2097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms" Sep 6 00:09:01.507757 kubelet[2097]: I0906 00:09:01.507585 2097 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 00:09:01.507757 kubelet[2097]: I0906 00:09:01.507651 2097 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:09:01.508532 kubelet[2097]: E0906 00:09:01.508501 2097 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 6 00:09:01.508797 kubelet[2097]: 
I0906 00:09:01.508705 2097 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:09:01.510348 kubelet[2097]: I0906 00:09:01.510283 2097 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:09:01.510540 kubelet[2097]: I0906 00:09:01.510518 2097 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:09:01.511330 kubelet[2097]: E0906 00:09:01.511285 2097 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:09:01.511562 kubelet[2097]: I0906 00:09:01.511537 2097 factory.go:223] Registration of the containerd container factory successfully Sep 6 00:09:01.511562 kubelet[2097]: I0906 00:09:01.511550 2097 factory.go:223] Registration of the systemd container factory successfully Sep 6 00:09:01.522073 kubelet[2097]: I0906 00:09:01.522025 2097 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 6 00:09:01.523555 kubelet[2097]: I0906 00:09:01.523231 2097 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 6 00:09:01.523555 kubelet[2097]: I0906 00:09:01.523252 2097 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 6 00:09:01.523555 kubelet[2097]: I0906 00:09:01.523273 2097 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 00:09:01.523555 kubelet[2097]: I0906 00:09:01.523280 2097 kubelet.go:2436] "Starting kubelet main sync loop" Sep 6 00:09:01.523555 kubelet[2097]: E0906 00:09:01.523324 2097 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:09:01.526745 kubelet[2097]: E0906 00:09:01.526598 2097 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 6 00:09:01.527046 kubelet[2097]: I0906 00:09:01.527030 2097 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 00:09:01.527046 kubelet[2097]: I0906 00:09:01.527048 2097 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 00:09:01.527202 kubelet[2097]: I0906 00:09:01.527066 2097 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:09:01.607114 kubelet[2097]: E0906 00:09:01.607078 2097 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:09:01.622444 kubelet[2097]: I0906 00:09:01.622411 2097 policy_none.go:49] "None policy: Start" Sep 6 00:09:01.622444 kubelet[2097]: I0906 00:09:01.622440 2097 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 00:09:01.622444 kubelet[2097]: I0906 00:09:01.622452 2097 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:09:01.624507 kubelet[2097]: E0906 00:09:01.624452 2097 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:09:01.628933 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 6 00:09:01.644013 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 6 00:09:01.646789 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 6 00:09:01.657638 kubelet[2097]: E0906 00:09:01.657597 2097 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 6 00:09:01.657890 kubelet[2097]: I0906 00:09:01.657825 2097 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:09:01.657890 kubelet[2097]: I0906 00:09:01.657844 2097 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:09:01.658105 kubelet[2097]: I0906 00:09:01.658082 2097 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:09:01.659952 kubelet[2097]: E0906 00:09:01.659897 2097 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 00:09:01.659952 kubelet[2097]: E0906 00:09:01.659935 2097 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 6 00:09:01.707842 kubelet[2097]: E0906 00:09:01.707789 2097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="400ms" Sep 6 00:09:01.760051 kubelet[2097]: I0906 00:09:01.759962 2097 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:09:01.760409 kubelet[2097]: E0906 00:09:01.760382 2097 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Sep 6 00:09:01.910644 kubelet[2097]: I0906 00:09:01.910573 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d950b5297febcab95056b9967affda0b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d950b5297febcab95056b9967affda0b\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:09:01.910644 kubelet[2097]: I0906 00:09:01.910625 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d950b5297febcab95056b9967affda0b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d950b5297febcab95056b9967affda0b\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:09:01.910644 kubelet[2097]: I0906 00:09:01.910645 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d950b5297febcab95056b9967affda0b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d950b5297febcab95056b9967affda0b\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:09:01.961705 kubelet[2097]: I0906 00:09:01.961685 2097 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:09:01.962079 kubelet[2097]: E0906 00:09:01.962048 2097 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" 
node="localhost" Sep 6 00:09:01.987386 systemd[1]: Created slice kubepods-burstable-podd950b5297febcab95056b9967affda0b.slice - libcontainer container kubepods-burstable-podd950b5297febcab95056b9967affda0b.slice. Sep 6 00:09:02.002865 kubelet[2097]: E0906 00:09:02.002840 2097 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:09:02.005109 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 6 00:09:02.011481 kubelet[2097]: I0906 00:09:02.011188 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:02.011481 kubelet[2097]: I0906 00:09:02.011218 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:02.011481 kubelet[2097]: I0906 00:09:02.011235 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:02.011481 kubelet[2097]: I0906 00:09:02.011252 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:09:02.011481 kubelet[2097]: I0906 00:09:02.011284 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:02.011632 kubelet[2097]: I0906 00:09:02.011322 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:02.013721 kubelet[2097]: E0906 00:09:02.013686 2097 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:09:02.015840 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 6 00:09:02.017078 kubelet[2097]: E0906 00:09:02.017057 2097 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:09:02.108662 kubelet[2097]: E0906 00:09:02.108589 2097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms" Sep 6 00:09:02.303549 kubelet[2097]: E0906 00:09:02.303441 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:02.304204 containerd[1446]: time="2025-09-06T00:09:02.304150504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d950b5297febcab95056b9967affda0b,Namespace:kube-system,Attempt:0,}" Sep 6 00:09:02.314670 kubelet[2097]: E0906 00:09:02.314632 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:02.316410 containerd[1446]: time="2025-09-06T00:09:02.316243738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 6 00:09:02.317540 kubelet[2097]: E0906 00:09:02.317512 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:02.317869 containerd[1446]: time="2025-09-06T00:09:02.317842955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 6 00:09:02.363192 kubelet[2097]: I0906 00:09:02.363146 2097 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:09:02.363553 kubelet[2097]: E0906 00:09:02.363512 2097 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Sep 6 00:09:02.387021 kubelet[2097]: E0906 00:09:02.386975 2097 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 6 00:09:02.799662 kubelet[2097]: E0906 00:09:02.799612 2097 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 6 00:09:02.825694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707585634.mount: Deactivated successfully. 
Sep 6 00:09:02.831965 containerd[1446]: time="2025-09-06T00:09:02.831920407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:09:02.833670 containerd[1446]: time="2025-09-06T00:09:02.833445032Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 6 00:09:02.834167 containerd[1446]: time="2025-09-06T00:09:02.834130050Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:09:02.834895 containerd[1446]: time="2025-09-06T00:09:02.834873895Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:09:02.835404 containerd[1446]: time="2025-09-06T00:09:02.835364549Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 6 00:09:02.836112 containerd[1446]: time="2025-09-06T00:09:02.836082102Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:09:02.837264 containerd[1446]: time="2025-09-06T00:09:02.836521333Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 6 00:09:02.841184 containerd[1446]: time="2025-09-06T00:09:02.841150432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 523.248651ms" Sep 6 00:09:02.841756 containerd[1446]: time="2025-09-06T00:09:02.841700232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:09:02.845527 containerd[1446]: time="2025-09-06T00:09:02.845487443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 541.258105ms" Sep 6 00:09:02.849029 containerd[1446]: time="2025-09-06T00:09:02.849000775Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 532.689208ms" Sep 6 00:09:02.897589 kubelet[2097]: E0906 00:09:02.897547 2097 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 6 00:09:02.910966 kubelet[2097]: E0906 00:09:02.910894 2097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="1.6s" Sep 6 00:09:02.960636 containerd[1446]: time="2025-09-06T00:09:02.960386827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:02.960636 containerd[1446]: time="2025-09-06T00:09:02.960444132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:02.960636 containerd[1446]: time="2025-09-06T00:09:02.960459899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:02.960636 containerd[1446]: time="2025-09-06T00:09:02.960546416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:02.961665 containerd[1446]: time="2025-09-06T00:09:02.961422478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:02.961665 containerd[1446]: time="2025-09-06T00:09:02.961493829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:02.961665 containerd[1446]: time="2025-09-06T00:09:02.961530045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:02.961665 containerd[1446]: time="2025-09-06T00:09:02.961622926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:02.965806 containerd[1446]: time="2025-09-06T00:09:02.965095120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:02.965806 containerd[1446]: time="2025-09-06T00:09:02.965152545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:02.965806 containerd[1446]: time="2025-09-06T00:09:02.965179036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:02.965806 containerd[1446]: time="2025-09-06T00:09:02.965271997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:02.983210 kubelet[2097]: E0906 00:09:02.983170 2097 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 6 00:09:02.988911 systemd[1]: Started cri-containerd-2aa245943de76005735b077a7097aa0ec2dc908704f346173b7bfdfa1d562b18.scope - libcontainer container 2aa245943de76005735b077a7097aa0ec2dc908704f346173b7bfdfa1d562b18. 
Sep 6 00:09:02.990057 systemd[1]: Started cri-containerd-7d229dbcfa8f64060a1f96dc92ac4f4e151133ed88d8d562d278282a61bffb0a.scope - libcontainer container 7d229dbcfa8f64060a1f96dc92ac4f4e151133ed88d8d562d278282a61bffb0a. Sep 6 00:09:02.992331 systemd[1]: Started cri-containerd-95e008e9d4c1ac42ff7162a7ea06b0f554db14b38273df00081b8662c8e6038b.scope - libcontainer container 95e008e9d4c1ac42ff7162a7ea06b0f554db14b38273df00081b8662c8e6038b. Sep 6 00:09:03.024037 containerd[1446]: time="2025-09-06T00:09:03.023984773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"2aa245943de76005735b077a7097aa0ec2dc908704f346173b7bfdfa1d562b18\"" Sep 6 00:09:03.025208 kubelet[2097]: E0906 00:09:03.025178 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:03.027537 containerd[1446]: time="2025-09-06T00:09:03.027489590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d950b5297febcab95056b9967affda0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"95e008e9d4c1ac42ff7162a7ea06b0f554db14b38273df00081b8662c8e6038b\"" Sep 6 00:09:03.030361 kubelet[2097]: E0906 00:09:03.030335 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:03.034206 containerd[1446]: time="2025-09-06T00:09:03.034091709Z" level=info msg="CreateContainer within sandbox \"2aa245943de76005735b077a7097aa0ec2dc908704f346173b7bfdfa1d562b18\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:09:03.034302 containerd[1446]: time="2025-09-06T00:09:03.034217237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d229dbcfa8f64060a1f96dc92ac4f4e151133ed88d8d562d278282a61bffb0a\"" Sep 6 00:09:03.035930 kubelet[2097]: E0906 00:09:03.035712 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:03.036292 containerd[1446]: time="2025-09-06T00:09:03.036199514Z" level=info msg="CreateContainer within sandbox \"95e008e9d4c1ac42ff7162a7ea06b0f554db14b38273df00081b8662c8e6038b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:09:03.039624 containerd[1446]: time="2025-09-06T00:09:03.039592248Z" level=info msg="CreateContainer within sandbox \"7d229dbcfa8f64060a1f96dc92ac4f4e151133ed88d8d562d278282a61bffb0a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:09:03.050603 containerd[1446]: time="2025-09-06T00:09:03.050496009Z" level=info msg="CreateContainer within sandbox \"2aa245943de76005735b077a7097aa0ec2dc908704f346173b7bfdfa1d562b18\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"592fdf998f231a805ba149f120e2b4b9a407b9477a15454302f9fb56350c452e\"" Sep 6 00:09:03.052766 containerd[1446]: time="2025-09-06T00:09:03.051651250Z" level=info msg="StartContainer for \"592fdf998f231a805ba149f120e2b4b9a407b9477a15454302f9fb56350c452e\"" Sep 6 00:09:03.057543 containerd[1446]: time="2025-09-06T00:09:03.057503243Z" level=info msg="CreateContainer 
within sandbox \"95e008e9d4c1ac42ff7162a7ea06b0f554db14b38273df00081b8662c8e6038b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0daea6f54029d6d6681c95d8438c95a5e90634a8510b500c9843dd3666632f89\"" Sep 6 00:09:03.058103 containerd[1446]: time="2025-09-06T00:09:03.058077942Z" level=info msg="StartContainer for \"0daea6f54029d6d6681c95d8438c95a5e90634a8510b500c9843dd3666632f89\"" Sep 6 00:09:03.058838 containerd[1446]: time="2025-09-06T00:09:03.058751279Z" level=info msg="CreateContainer within sandbox \"7d229dbcfa8f64060a1f96dc92ac4f4e151133ed88d8d562d278282a61bffb0a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b24e063a12a84b787d175ffd6ded09b6e3d22722102992846c53fcff7e0affd3\"" Sep 6 00:09:03.059216 containerd[1446]: time="2025-09-06T00:09:03.059157834Z" level=info msg="StartContainer for \"b24e063a12a84b787d175ffd6ded09b6e3d22722102992846c53fcff7e0affd3\"" Sep 6 00:09:03.073927 systemd[1]: Started cri-containerd-592fdf998f231a805ba149f120e2b4b9a407b9477a15454302f9fb56350c452e.scope - libcontainer container 592fdf998f231a805ba149f120e2b4b9a407b9477a15454302f9fb56350c452e. Sep 6 00:09:03.080467 systemd[1]: Started cri-containerd-0daea6f54029d6d6681c95d8438c95a5e90634a8510b500c9843dd3666632f89.scope - libcontainer container 0daea6f54029d6d6681c95d8438c95a5e90634a8510b500c9843dd3666632f89. Sep 6 00:09:03.086197 systemd[1]: Started cri-containerd-b24e063a12a84b787d175ffd6ded09b6e3d22722102992846c53fcff7e0affd3.scope - libcontainer container b24e063a12a84b787d175ffd6ded09b6e3d22722102992846c53fcff7e0affd3. Sep 6 00:09:03.110039 containerd[1446]: time="2025-09-06T00:09:03.109992712Z" level=info msg="StartContainer for \"592fdf998f231a805ba149f120e2b4b9a407b9477a15454302f9fb56350c452e\" returns successfully" Sep 6 00:09:03.122091 containerd[1446]: time="2025-09-06T00:09:03.122011017Z" level=info msg="StartContainer for \"0daea6f54029d6d6681c95d8438c95a5e90634a8510b500c9843dd3666632f89\" returns successfully" Sep 6 00:09:03.125713 containerd[1446]: time="2025-09-06T00:09:03.124968706Z" level=info msg="StartContainer for \"b24e063a12a84b787d175ffd6ded09b6e3d22722102992846c53fcff7e0affd3\" returns successfully" Sep 6 00:09:03.164977 kubelet[2097]: I0906 00:09:03.164871 2097 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:09:03.165279 kubelet[2097]: E0906 00:09:03.165220 2097 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Sep 6 00:09:03.532617 kubelet[2097]: E0906 00:09:03.532395 2097 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:09:03.532617 kubelet[2097]: E0906 00:09:03.532523 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:03.536012 kubelet[2097]: E0906 00:09:03.535988 2097 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:09:03.536100 kubelet[2097]: E0906 00:09:03.536089 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:03.537468 kubelet[2097]: E0906 
00:09:03.537446 2097 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:09:03.537551 kubelet[2097]: E0906 00:09:03.537535 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:04.539969 kubelet[2097]: E0906 00:09:04.539696 2097 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:09:04.539969 kubelet[2097]: E0906 00:09:04.539841 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:04.539969 kubelet[2097]: E0906 00:09:04.539891 2097 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:09:04.540318 kubelet[2097]: E0906 00:09:04.540020 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:04.772955 kubelet[2097]: I0906 00:09:04.772179 2097 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:09:04.802761 kubelet[2097]: E0906 00:09:04.802339 2097 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 6 00:09:04.888850 kubelet[2097]: I0906 00:09:04.888806 2097 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 6 00:09:04.888850 kubelet[2097]: E0906 00:09:04.888847 2097 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 6 00:09:04.907713 kubelet[2097]: I0906 00:09:04.907666 2097 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:04.912569 kubelet[2097]: E0906 00:09:04.912516 2097 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:04.912569 kubelet[2097]: I0906 00:09:04.912564 2097 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 00:09:04.914191 kubelet[2097]: E0906 00:09:04.914147 2097 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 6 00:09:04.914191 kubelet[2097]: I0906 00:09:04.914187 2097 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 00:09:04.916609 kubelet[2097]: E0906 00:09:04.915893 2097 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 6 00:09:05.497674 kubelet[2097]: I0906 00:09:05.497634 2097 apiserver.go:52] "Watching apiserver" Sep 6 00:09:05.508191 kubelet[2097]: I0906 00:09:05.508164 2097 desired_state_of_world_populator.go:158] "Finished populating initial desired state 
of world" Sep 6 00:09:05.668946 kubelet[2097]: I0906 00:09:05.668767 2097 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 00:09:05.670669 kubelet[2097]: E0906 00:09:05.670643 2097 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 6 00:09:05.670830 kubelet[2097]: E0906 00:09:05.670815 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:07.015590 systemd[1]: Reloading requested from client PID 2384 ('systemctl') (unit session-7.scope)... Sep 6 00:09:07.015606 systemd[1]: Reloading... Sep 6 00:09:07.089836 zram_generator::config[2426]: No configuration found. Sep 6 00:09:07.176018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:09:07.247955 systemd[1]: Reloading finished in 232 ms. Sep 6 00:09:07.284094 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:09:07.297669 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:09:07.297914 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:09:07.297967 systemd[1]: kubelet.service: Consumed 1.267s CPU time, 133.7M memory peak, 0B memory swap peak. Sep 6 00:09:07.308055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:09:07.406874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:09:07.410923 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 6 00:09:07.446871 kubelet[2465]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:09:07.446871 kubelet[2465]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 00:09:07.446871 kubelet[2465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:09:07.446871 kubelet[2465]: I0906 00:09:07.446727 2465 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:09:07.451631 kubelet[2465]: I0906 00:09:07.451580 2465 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 6 00:09:07.451631 kubelet[2465]: I0906 00:09:07.451606 2465 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:09:07.451797 kubelet[2465]: I0906 00:09:07.451782 2465 server.go:956] "Client rotation is on, will bootstrap in background" Sep 6 00:09:07.452878 kubelet[2465]: I0906 00:09:07.452861 2465 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 6 00:09:07.455088 kubelet[2465]: I0906 00:09:07.455064 2465 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:09:07.459807 kubelet[2465]: E0906 00:09:07.459775 2465 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:09:07.459807 kubelet[2465]: I0906 00:09:07.459803 2465 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:09:07.462008 kubelet[2465]: I0906 00:09:07.461979 2465 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:09:07.462182 kubelet[2465]: I0906 00:09:07.462164 2465 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:09:07.462318 kubelet[2465]: I0906 00:09:07.462183 2465 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:09:07.462318 kubelet[2465]: I0906 00:09:07.462316 2465 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 6 00:09:07.462421 kubelet[2465]: I0906 00:09:07.462324 2465 container_manager_linux.go:303] "Creating device plugin manager" Sep 6 00:09:07.462421 kubelet[2465]: I0906 00:09:07.462373 2465 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:09:07.462519 kubelet[2465]: I0906 00:09:07.462509 2465 kubelet.go:480] "Attempting to sync node with API server" Sep 6 00:09:07.462550 kubelet[2465]: I0906 00:09:07.462526 2465 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:09:07.462550 kubelet[2465]: I0906 00:09:07.462548 2465 kubelet.go:386] "Adding apiserver pod source" Sep 6 00:09:07.462602 kubelet[2465]: I0906 00:09:07.462560 2465 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:09:07.463757 kubelet[2465]: I0906 00:09:07.463652 2465 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 6 00:09:07.464217 kubelet[2465]: I0906 00:09:07.464188 2465 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 6 00:09:07.466109 kubelet[2465]: I0906 00:09:07.466083 2465 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 00:09:07.466165 kubelet[2465]: I0906 00:09:07.466128 2465 server.go:1289] "Started kubelet" Sep 6 00:09:07.466760 kubelet[2465]: I0906 00:09:07.466222 2465 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:09:07.466760 kubelet[2465]: I0906 00:09:07.466309 2465 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:09:07.466760 kubelet[2465]: I0906 00:09:07.466560 2465 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:09:07.467298 kubelet[2465]: I0906 00:09:07.467279 2465 server.go:317] "Adding debug handlers to kubelet server" Sep 6 00:09:07.468088 kubelet[2465]: I0906 00:09:07.468058 2465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:09:07.468542 kubelet[2465]: I0906 00:09:07.468519 2465 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:09:07.469582 kubelet[2465]: E0906 00:09:07.469552 2465 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:09:07.469582 kubelet[2465]: I0906 00:09:07.469584 2465 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 00:09:07.469768 kubelet[2465]: I0906 00:09:07.469712 2465 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 00:09:07.469843 kubelet[2465]: I0906 00:09:07.469826 2465 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:09:07.478043 kubelet[2465]: I0906 00:09:07.477999 2465 factory.go:223] Registration of the systemd container factory successfully Sep 6 00:09:07.478123 kubelet[2465]: I0906 00:09:07.478100 2465 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:09:07.489739 kubelet[2465]: I0906 00:09:07.488642 2465 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 6 00:09:07.489739 kubelet[2465]: I0906 00:09:07.489467 2465 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:09:07.489739 kubelet[2465]: I0906 00:09:07.489480 2465 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 6 00:09:07.489739 kubelet[2465]: I0906 00:09:07.489495 2465 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 00:09:07.489739 kubelet[2465]: I0906 00:09:07.489502 2465 kubelet.go:2436] "Starting kubelet main sync loop" Sep 6 00:09:07.489739 kubelet[2465]: E0906 00:09:07.489536 2465 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:09:07.494809 kubelet[2465]: E0906 00:09:07.494226 2465 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:09:07.494809 kubelet[2465]: I0906 00:09:07.494571 2465 factory.go:223] Registration of the containerd container factory successfully Sep 6 00:09:07.526804 kubelet[2465]: I0906 00:09:07.526776 2465 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 00:09:07.526804 kubelet[2465]: I0906 00:09:07.526796 2465 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 00:09:07.526804 kubelet[2465]: I0906 00:09:07.526814 2465 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:09:07.526977 kubelet[2465]: I0906 00:09:07.526928 2465 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:09:07.526977 kubelet[2465]: I0906 00:09:07.526937 2465 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:09:07.526977 kubelet[2465]: I0906 00:09:07.526953 2465 policy_none.go:49] "None policy: Start" Sep 6 00:09:07.526977 kubelet[2465]: I0906 00:09:07.526961 2465 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 00:09:07.526977 kubelet[2465]: I0906 00:09:07.526969 2465 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:09:07.527075 kubelet[2465]: I0906 00:09:07.527042 2465 state_mem.go:75] "Updated machine memory state" Sep 6 00:09:07.530400 kubelet[2465]: E0906 00:09:07.530375 2465 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 6 00:09:07.530680 kubelet[2465]: I0906 00:09:07.530536 2465 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:09:07.530680 kubelet[2465]: I0906 00:09:07.530554 2465 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:09:07.530680 kubelet[2465]: I0906 00:09:07.530681 2465 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:09:07.532243 kubelet[2465]: E0906 00:09:07.532205 2465 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 6 00:09:07.591374 kubelet[2465]: I0906 00:09:07.591247 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 00:09:07.591704 kubelet[2465]: I0906 00:09:07.591552 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:07.591797 kubelet[2465]: I0906 00:09:07.591750 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 00:09:07.634126 kubelet[2465]: I0906 00:09:07.634101 2465 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:09:07.640786 kubelet[2465]: I0906 00:09:07.639928 2465 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 6 00:09:07.640786 kubelet[2465]: I0906 00:09:07.640007 2465 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 6 00:09:07.671183 kubelet[2465]: I0906 00:09:07.670940 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:09:07.671183 kubelet[2465]: I0906 00:09:07.670983 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:07.671183 kubelet[2465]: I0906 00:09:07.671003 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:07.671183 kubelet[2465]: I0906 00:09:07.671019 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d950b5297febcab95056b9967affda0b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d950b5297febcab95056b9967affda0b\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:09:07.671183 kubelet[2465]: I0906 00:09:07.671034 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d950b5297febcab95056b9967affda0b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d950b5297febcab95056b9967affda0b\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:09:07.671412 kubelet[2465]: I0906 00:09:07.671048 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d950b5297febcab95056b9967affda0b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d950b5297febcab95056b9967affda0b\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:09:07.671412 kubelet[2465]: I0906 00:09:07.671062 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:07.671412 kubelet[2465]: I0906 00:09:07.671075 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:07.671412 kubelet[2465]: I0906 00:09:07.671091 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:07.897722 kubelet[2465]: E0906 00:09:07.897524 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:07.897722 kubelet[2465]: E0906 00:09:07.897571 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:07.897722 kubelet[2465]: E0906 00:09:07.897604 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:08.463644 kubelet[2465]: I0906 00:09:08.463587 2465 apiserver.go:52] "Watching apiserver" Sep 6 00:09:08.469861 kubelet[2465]: I0906 00:09:08.469825 2465 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 00:09:08.505065 kubelet[2465]: I0906 00:09:08.504649 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 00:09:08.505065 kubelet[2465]: I0906 00:09:08.504804 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:08.506445 kubelet[2465]: I0906 00:09:08.506415 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 00:09:08.512623 kubelet[2465]: E0906 00:09:08.512593 2465 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 6 00:09:08.512942 kubelet[2465]: E0906 00:09:08.512804 2465 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 00:09:08.513079 kubelet[2465]: E0906 00:09:08.512832 2465 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:09:08.513337 kubelet[2465]: E0906 00:09:08.513238 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:08.513337 kubelet[2465]: E0906 00:09:08.513097 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:08.513515 kubelet[2465]: E0906 00:09:08.513472 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:08.530777 kubelet[2465]: I0906 00:09:08.529236 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.529223011 podStartE2EDuration="1.529223011s" podCreationTimestamp="2025-09-06 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:09:08.525599902 +0000 UTC m=+1.110423660" watchObservedRunningTime="2025-09-06 00:09:08.529223011 +0000 UTC m=+1.114046769" Sep 6 00:09:08.534362 kubelet[2465]: I0906 00:09:08.534312 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.534300205 podStartE2EDuration="1.534300205s" podCreationTimestamp="2025-09-06 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:09:08.534043675 +0000 UTC m=+1.118867393" watchObservedRunningTime="2025-09-06 00:09:08.534300205 +0000 UTC m=+1.119123963" Sep 6 00:09:09.506207 kubelet[2465]: E0906 00:09:09.506125 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:09.506207 kubelet[2465]: E0906 00:09:09.506166 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:09.506752 kubelet[2465]: E0906 00:09:09.506243 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:10.507537 kubelet[2465]: E0906 00:09:10.507481 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:13.470535 kubelet[2465]: I0906 00:09:13.470503 2465 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:09:13.471210 containerd[1446]: time="2025-09-06T00:09:13.471033772Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 6 00:09:13.471513 kubelet[2465]: I0906 00:09:13.471225 2465 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:09:14.040782 kubelet[2465]: E0906 00:09:14.040654 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:14.056390 kubelet[2465]: I0906 00:09:14.056289 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.056273316 podStartE2EDuration="7.056273316s" podCreationTimestamp="2025-09-06 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:09:08.541483011 +0000 UTC m=+1.126306769" watchObservedRunningTime="2025-09-06 00:09:14.056273316 +0000 UTC m=+6.641097074" Sep 6 00:09:14.149064 systemd[1]: Created slice kubepods-besteffort-pod0d7630ed_24a6_46d4_b3f4_3efd254e363d.slice - libcontainer container kubepods-besteffort-pod0d7630ed_24a6_46d4_b3f4_3efd254e363d.slice. Sep 6 00:09:14.215324 kubelet[2465]: I0906 00:09:14.215233 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0d7630ed-24a6-46d4-b3f4-3efd254e363d-kube-proxy\") pod \"kube-proxy-scdlt\" (UID: \"0d7630ed-24a6-46d4-b3f4-3efd254e363d\") " pod="kube-system/kube-proxy-scdlt" Sep 6 00:09:14.215324 kubelet[2465]: I0906 00:09:14.215307 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d7630ed-24a6-46d4-b3f4-3efd254e363d-xtables-lock\") pod \"kube-proxy-scdlt\" (UID: \"0d7630ed-24a6-46d4-b3f4-3efd254e363d\") " pod="kube-system/kube-proxy-scdlt" Sep 6 00:09:14.215526 kubelet[2465]: I0906 00:09:14.215349 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d7630ed-24a6-46d4-b3f4-3efd254e363d-lib-modules\") pod \"kube-proxy-scdlt\" (UID: \"0d7630ed-24a6-46d4-b3f4-3efd254e363d\") " pod="kube-system/kube-proxy-scdlt" Sep 6 00:09:14.215526 kubelet[2465]: I0906 00:09:14.215386 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7mnn\" (UniqueName: \"kubernetes.io/projected/0d7630ed-24a6-46d4-b3f4-3efd254e363d-kube-api-access-v7mnn\") pod \"kube-proxy-scdlt\" (UID: \"0d7630ed-24a6-46d4-b3f4-3efd254e363d\") " pod="kube-system/kube-proxy-scdlt" Sep 6 00:09:14.323384 kubelet[2465]: E0906 00:09:14.323279 2465 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 6 00:09:14.323384 kubelet[2465]: E0906 00:09:14.323312 2465 projected.go:194] Error preparing data for projected volume kube-api-access-v7mnn for pod kube-system/kube-proxy-scdlt: configmap "kube-root-ca.crt" not found Sep 6 00:09:14.323384 kubelet[2465]: E0906 00:09:14.323375 2465 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0d7630ed-24a6-46d4-b3f4-3efd254e363d-kube-api-access-v7mnn podName:0d7630ed-24a6-46d4-b3f4-3efd254e363d nodeName:}" failed. No retries permitted until 2025-09-06 00:09:14.82335382 +0000 UTC m=+7.408177578 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7mnn" (UniqueName: "kubernetes.io/projected/0d7630ed-24a6-46d4-b3f4-3efd254e363d-kube-api-access-v7mnn") pod "kube-proxy-scdlt" (UID: "0d7630ed-24a6-46d4-b3f4-3efd254e363d") : configmap "kube-root-ca.crt" not found Sep 6 00:09:14.513612 kubelet[2465]: E0906 00:09:14.513550 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:14.705707 systemd[1]: Created slice kubepods-besteffort-pod18d3e0cb_5bab_4221_ac59_1533b5bd825a.slice - libcontainer container kubepods-besteffort-pod18d3e0cb_5bab_4221_ac59_1533b5bd825a.slice. Sep 6 00:09:14.718316 kubelet[2465]: I0906 00:09:14.718264 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-997wt\" (UniqueName: \"kubernetes.io/projected/18d3e0cb-5bab-4221-ac59-1533b5bd825a-kube-api-access-997wt\") pod \"tigera-operator-755d956888-2qkpl\" (UID: \"18d3e0cb-5bab-4221-ac59-1533b5bd825a\") " pod="tigera-operator/tigera-operator-755d956888-2qkpl" Sep 6 00:09:14.718316 kubelet[2465]: I0906 00:09:14.718302 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/18d3e0cb-5bab-4221-ac59-1533b5bd825a-var-lib-calico\") pod \"tigera-operator-755d956888-2qkpl\" (UID: \"18d3e0cb-5bab-4221-ac59-1533b5bd825a\") " pod="tigera-operator/tigera-operator-755d956888-2qkpl" Sep 6 00:09:15.008513 containerd[1446]: time="2025-09-06T00:09:15.008412075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-2qkpl,Uid:18d3e0cb-5bab-4221-ac59-1533b5bd825a,Namespace:tigera-operator,Attempt:0,}" Sep 6 00:09:15.027006 containerd[1446]: time="2025-09-06T00:09:15.026938622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:15.027140 containerd[1446]: time="2025-09-06T00:09:15.026999387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:15.027140 containerd[1446]: time="2025-09-06T00:09:15.027023909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:15.027140 containerd[1446]: time="2025-09-06T00:09:15.027093915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:15.045877 systemd[1]: Started cri-containerd-09e5d576cea58dae80e38d09de8939be1e0d609e9bfc5453a2df719e7f775aae.scope - libcontainer container 09e5d576cea58dae80e38d09de8939be1e0d609e9bfc5453a2df719e7f775aae. 
Sep 6 00:09:15.057675 kubelet[2465]: E0906 00:09:15.057635 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:15.059942 containerd[1446]: time="2025-09-06T00:09:15.058914212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-scdlt,Uid:0d7630ed-24a6-46d4-b3f4-3efd254e363d,Namespace:kube-system,Attempt:0,}" Sep 6 00:09:15.072269 containerd[1446]: time="2025-09-06T00:09:15.072232164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-2qkpl,Uid:18d3e0cb-5bab-4221-ac59-1533b5bd825a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"09e5d576cea58dae80e38d09de8939be1e0d609e9bfc5453a2df719e7f775aae\"" Sep 6 00:09:15.074045 containerd[1446]: time="2025-09-06T00:09:15.074022714Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 6 00:09:15.081262 containerd[1446]: time="2025-09-06T00:09:15.081158430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:15.081262 containerd[1446]: time="2025-09-06T00:09:15.081217755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:15.081480 containerd[1446]: time="2025-09-06T00:09:15.081274919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:15.081480 containerd[1446]: time="2025-09-06T00:09:15.081365847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:15.104882 systemd[1]: Started cri-containerd-d4aa8812cee5014807034fbd7c6a70ace93cf2ccdc643e30e00ef73e3468c9ed.scope - libcontainer container d4aa8812cee5014807034fbd7c6a70ace93cf2ccdc643e30e00ef73e3468c9ed. Sep 6 00:09:15.121787 containerd[1446]: time="2025-09-06T00:09:15.121653731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-scdlt,Uid:0d7630ed-24a6-46d4-b3f4-3efd254e363d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4aa8812cee5014807034fbd7c6a70ace93cf2ccdc643e30e00ef73e3468c9ed\"" Sep 6 00:09:15.124561 kubelet[2465]: E0906 00:09:15.124523 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:15.130706 containerd[1446]: time="2025-09-06T00:09:15.130672085Z" level=info msg="CreateContainer within sandbox \"d4aa8812cee5014807034fbd7c6a70ace93cf2ccdc643e30e00ef73e3468c9ed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:09:15.147343 containerd[1446]: time="2025-09-06T00:09:15.147304474Z" level=info msg="CreateContainer within sandbox \"d4aa8812cee5014807034fbd7c6a70ace93cf2ccdc643e30e00ef73e3468c9ed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"19233fb41c2858b9d8534c0e098652b8351cc55daba04f79c6b69a77522dfff0\"" Sep 6 00:09:15.147875 containerd[1446]: time="2025-09-06T00:09:15.147837078Z" level=info msg="StartContainer for \"19233fb41c2858b9d8534c0e098652b8351cc55daba04f79c6b69a77522dfff0\"" Sep 6 00:09:15.176883 systemd[1]: Started cri-containerd-19233fb41c2858b9d8534c0e098652b8351cc55daba04f79c6b69a77522dfff0.scope - libcontainer container 19233fb41c2858b9d8534c0e098652b8351cc55daba04f79c6b69a77522dfff0. 
Sep 6 00:09:15.198855 containerd[1446]: time="2025-09-06T00:09:15.198782012Z" level=info msg="StartContainer for \"19233fb41c2858b9d8534c0e098652b8351cc55daba04f79c6b69a77522dfff0\" returns successfully" Sep 6 00:09:15.516428 kubelet[2465]: E0906 00:09:15.516301 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:15.525673 kubelet[2465]: I0906 00:09:15.525622 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-scdlt" podStartSLOduration=1.525551141 podStartE2EDuration="1.525551141s" podCreationTimestamp="2025-09-06 00:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:09:15.525307641 +0000 UTC m=+8.110131399" watchObservedRunningTime="2025-09-06 00:09:15.525551141 +0000 UTC m=+8.110374899" Sep 6 00:09:16.232592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3988236713.mount: Deactivated successfully. Sep 6 00:09:18.069080 kubelet[2465]: E0906 00:09:18.068252 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:18.391875 containerd[1446]: time="2025-09-06T00:09:18.391770357Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:18.392702 containerd[1446]: time="2025-09-06T00:09:18.392659020Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 6 00:09:18.393701 containerd[1446]: time="2025-09-06T00:09:18.393662731Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:18.396149 containerd[1446]: time="2025-09-06T00:09:18.396105504Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:18.396835 containerd[1446]: time="2025-09-06T00:09:18.396796313Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 3.322739516s" Sep 6 00:09:18.396887 containerd[1446]: time="2025-09-06T00:09:18.396836276Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 6 00:09:18.402617 containerd[1446]: time="2025-09-06T00:09:18.402518198Z" level=info msg="CreateContainer within sandbox \"09e5d576cea58dae80e38d09de8939be1e0d609e9bfc5453a2df719e7f775aae\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 6 00:09:18.430069 containerd[1446]: time="2025-09-06T00:09:18.430033188Z" level=info msg="CreateContainer within sandbox \"09e5d576cea58dae80e38d09de8939be1e0d609e9bfc5453a2df719e7f775aae\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"266b051cd56b12f0d9bc17252efdcb13d80b4dea0ff27e5384a1a0dd9ecd4e48\"" 
Sep 6 00:09:18.430454 containerd[1446]: time="2025-09-06T00:09:18.430427016Z" level=info msg="StartContainer for \"266b051cd56b12f0d9bc17252efdcb13d80b4dea0ff27e5384a1a0dd9ecd4e48\"" Sep 6 00:09:18.456937 systemd[1]: Started cri-containerd-266b051cd56b12f0d9bc17252efdcb13d80b4dea0ff27e5384a1a0dd9ecd4e48.scope - libcontainer container 266b051cd56b12f0d9bc17252efdcb13d80b4dea0ff27e5384a1a0dd9ecd4e48. Sep 6 00:09:18.479229 containerd[1446]: time="2025-09-06T00:09:18.479190591Z" level=info msg="StartContainer for \"266b051cd56b12f0d9bc17252efdcb13d80b4dea0ff27e5384a1a0dd9ecd4e48\" returns successfully" Sep 6 00:09:18.526009 kubelet[2465]: E0906 00:09:18.525867 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:19.413786 kubelet[2465]: E0906 00:09:19.413684 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:19.417604 kubelet[2465]: I0906 00:09:19.417449 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-2qkpl" podStartSLOduration=2.091304052 podStartE2EDuration="5.417437452s" podCreationTimestamp="2025-09-06 00:09:14 +0000 UTC" firstStartedPulling="2025-09-06 00:09:15.073790294 +0000 UTC m=+7.658614052" lastFinishedPulling="2025-09-06 00:09:18.399923734 +0000 UTC m=+10.984747452" observedRunningTime="2025-09-06 00:09:18.534045918 +0000 UTC m=+11.118869636" watchObservedRunningTime="2025-09-06 00:09:19.417437452 +0000 UTC m=+12.002261170" Sep 6 00:09:19.529794 kubelet[2465]: E0906 00:09:19.527081 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:23.964953 sudo[1616]: pam_unix(sudo:session): session closed for user root Sep 6 00:09:23.967437 sshd[1613]: pam_unix(sshd:session): session closed for user core Sep 6 00:09:23.971415 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:45734.service: Deactivated successfully. Sep 6 00:09:23.973243 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:09:23.973454 systemd[1]: session-7.scope: Consumed 5.326s CPU time, 155.7M memory peak, 0B memory swap peak. Sep 6 00:09:23.973868 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:09:23.974946 systemd-logind[1425]: Removed session 7. Sep 6 00:09:24.117389 update_engine[1427]: I20250906 00:09:24.117317 1427 update_attempter.cc:509] Updating boot flags... Sep 6 00:09:24.206800 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2878) Sep 6 00:09:24.286760 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2880) Sep 6 00:09:28.414626 systemd[1]: Created slice kubepods-besteffort-pod2dc783f7_0232_45b6_bd34_15f92db17229.slice - libcontainer container kubepods-besteffort-pod2dc783f7_0232_45b6_bd34_15f92db17229.slice. 
Sep 6 00:09:28.426459 kubelet[2465]: I0906 00:09:28.426399 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5phnl\" (UniqueName: \"kubernetes.io/projected/2dc783f7-0232-45b6-bd34-15f92db17229-kube-api-access-5phnl\") pod \"calico-typha-6d6989645c-7dxsd\" (UID: \"2dc783f7-0232-45b6-bd34-15f92db17229\") " pod="calico-system/calico-typha-6d6989645c-7dxsd" Sep 6 00:09:28.426459 kubelet[2465]: I0906 00:09:28.426447 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dc783f7-0232-45b6-bd34-15f92db17229-tigera-ca-bundle\") pod \"calico-typha-6d6989645c-7dxsd\" (UID: \"2dc783f7-0232-45b6-bd34-15f92db17229\") " pod="calico-system/calico-typha-6d6989645c-7dxsd" Sep 6 00:09:28.426459 kubelet[2465]: I0906 00:09:28.426464 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2dc783f7-0232-45b6-bd34-15f92db17229-typha-certs\") pod \"calico-typha-6d6989645c-7dxsd\" (UID: \"2dc783f7-0232-45b6-bd34-15f92db17229\") " pod="calico-system/calico-typha-6d6989645c-7dxsd" Sep 6 00:09:28.675870 systemd[1]: Created slice kubepods-besteffort-pod9597df6d_e62d_4930_a7a6_09195a936ae5.slice - libcontainer container kubepods-besteffort-pod9597df6d_e62d_4930_a7a6_09195a936ae5.slice. Sep 6 00:09:28.718409 kubelet[2465]: E0906 00:09:28.718372 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:28.719564 containerd[1446]: time="2025-09-06T00:09:28.719528795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d6989645c-7dxsd,Uid:2dc783f7-0232-45b6-bd34-15f92db17229,Namespace:calico-system,Attempt:0,}" Sep 6 00:09:28.727870 kubelet[2465]: I0906 00:09:28.727554 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9597df6d-e62d-4930-a7a6-09195a936ae5-cni-net-dir\") pod \"calico-node-7hznx\" (UID: \"9597df6d-e62d-4930-a7a6-09195a936ae5\") " pod="calico-system/calico-node-7hznx" Sep 6 00:09:28.727870 kubelet[2465]: I0906 00:09:28.727597 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9597df6d-e62d-4930-a7a6-09195a936ae5-flexvol-driver-host\") pod \"calico-node-7hznx\" (UID: \"9597df6d-e62d-4930-a7a6-09195a936ae5\") " pod="calico-system/calico-node-7hznx" Sep 6 00:09:28.727870 kubelet[2465]: I0906 00:09:28.727619 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9597df6d-e62d-4930-a7a6-09195a936ae5-cni-bin-dir\") pod \"calico-node-7hznx\" (UID: \"9597df6d-e62d-4930-a7a6-09195a936ae5\") " pod="calico-system/calico-node-7hznx" Sep 6 00:09:28.727870 kubelet[2465]: I0906 00:09:28.727640 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9597df6d-e62d-4930-a7a6-09195a936ae5-var-lib-calico\") pod \"calico-node-7hznx\" (UID: \"9597df6d-e62d-4930-a7a6-09195a936ae5\") " pod="calico-system/calico-node-7hznx" Sep 6 00:09:28.727870 kubelet[2465]: I0906 00:09:28.727657 2465 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9597df6d-e62d-4930-a7a6-09195a936ae5-xtables-lock\") pod \"calico-node-7hznx\" (UID: \"9597df6d-e62d-4930-a7a6-09195a936ae5\") " pod="calico-system/calico-node-7hznx" Sep 6 00:09:28.728757 kubelet[2465]: I0906 00:09:28.727674 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9597df6d-e62d-4930-a7a6-09195a936ae5-node-certs\") pod \"calico-node-7hznx\" (UID: \"9597df6d-e62d-4930-a7a6-09195a936ae5\") " pod="calico-system/calico-node-7hznx" Sep 6 00:09:28.728757 kubelet[2465]: I0906 00:09:28.727697 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9597df6d-e62d-4930-a7a6-09195a936ae5-lib-modules\") pod \"calico-node-7hznx\" (UID: \"9597df6d-e62d-4930-a7a6-09195a936ae5\") " pod="calico-system/calico-node-7hznx" Sep 6 00:09:28.728757 kubelet[2465]: I0906 00:09:28.727728 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9597df6d-e62d-4930-a7a6-09195a936ae5-policysync\") pod \"calico-node-7hznx\" (UID: \"9597df6d-e62d-4930-a7a6-09195a936ae5\") " pod="calico-system/calico-node-7hznx" Sep 6 00:09:28.728757 kubelet[2465]: I0906 00:09:28.727759 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9597df6d-e62d-4930-a7a6-09195a936ae5-tigera-ca-bundle\") pod \"calico-node-7hznx\" (UID: \"9597df6d-e62d-4930-a7a6-09195a936ae5\") " pod="calico-system/calico-node-7hznx" Sep 6 00:09:28.728757 kubelet[2465]: I0906 00:09:28.727775 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9597df6d-e62d-4930-a7a6-09195a936ae5-cni-log-dir\") pod \"calico-node-7hznx\" (UID: \"9597df6d-e62d-4930-a7a6-09195a936ae5\") " pod="calico-system/calico-node-7hznx" Sep 6 00:09:28.728878 kubelet[2465]: I0906 00:09:28.727790 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9597df6d-e62d-4930-a7a6-09195a936ae5-var-run-calico\") pod \"calico-node-7hznx\" (UID: \"9597df6d-e62d-4930-a7a6-09195a936ae5\") " pod="calico-system/calico-node-7hznx" Sep 6 00:09:28.728878 kubelet[2465]: I0906 00:09:28.727805 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjtnh\" (UniqueName: \"kubernetes.io/projected/9597df6d-e62d-4930-a7a6-09195a936ae5-kube-api-access-hjtnh\") pod \"calico-node-7hznx\" (UID: \"9597df6d-e62d-4930-a7a6-09195a936ae5\") " pod="calico-system/calico-node-7hznx" Sep 6 00:09:28.741888 containerd[1446]: time="2025-09-06T00:09:28.741759304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:28.741888 containerd[1446]: time="2025-09-06T00:09:28.741855588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:28.741888 containerd[1446]: time="2025-09-06T00:09:28.741871309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:28.742048 containerd[1446]: time="2025-09-06T00:09:28.741940672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:28.763912 systemd[1]: Started cri-containerd-06e3a86e6b90a05a44e570e87e4e6b90ca7a4c3f73244b33ccedcb60e3256c30.scope - libcontainer container 06e3a86e6b90a05a44e570e87e4e6b90ca7a4c3f73244b33ccedcb60e3256c30. Sep 6 00:09:28.798141 containerd[1446]: time="2025-09-06T00:09:28.798086189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d6989645c-7dxsd,Uid:2dc783f7-0232-45b6-bd34-15f92db17229,Namespace:calico-system,Attempt:0,} returns sandbox id \"06e3a86e6b90a05a44e570e87e4e6b90ca7a4c3f73244b33ccedcb60e3256c30\"" Sep 6 00:09:28.798960 kubelet[2465]: E0906 00:09:28.798866 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:28.800166 containerd[1446]: time="2025-09-06T00:09:28.800092395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 6 00:09:28.830088 kubelet[2465]: E0906 00:09:28.829956 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:28.830088 kubelet[2465]: W0906 00:09:28.829998 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:28.836825 kubelet[2465]: E0906 00:09:28.836782 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:28.837123 kubelet[2465]: E0906 00:09:28.837102 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:28.837170 kubelet[2465]: W0906 00:09:28.837124 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:28.837170 kubelet[2465]: E0906 00:09:28.837147 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:28.837339 kubelet[2465]: E0906 00:09:28.837327 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:28.837369 kubelet[2465]: W0906 00:09:28.837339 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:28.837369 kubelet[2465]: E0906 00:09:28.837349 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:28.837567 kubelet[2465]: E0906 00:09:28.837555 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:28.837991 kubelet[2465]: W0906 00:09:28.837912 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:28.837991 kubelet[2465]: E0906 00:09:28.837936 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:28.838424 kubelet[2465]: E0906 00:09:28.838312 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:28.838424 kubelet[2465]: W0906 00:09:28.838330 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:28.838424 kubelet[2465]: E0906 00:09:28.838372 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:28.838643 kubelet[2465]: E0906 00:09:28.838627 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:28.838643 kubelet[2465]: W0906 00:09:28.838640 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:28.838701 kubelet[2465]: E0906 00:09:28.838650 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:28.838923 kubelet[2465]: E0906 00:09:28.838909 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:28.838958 kubelet[2465]: W0906 00:09:28.838922 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:28.838958 kubelet[2465]: E0906 00:09:28.838934 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:28.839102 kubelet[2465]: E0906 00:09:28.839089 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:28.839152 kubelet[2465]: W0906 00:09:28.839104 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:28.839152 kubelet[2465]: E0906 00:09:28.839121 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:28.842591 kubelet[2465]: E0906 00:09:28.842571 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:28.842591 kubelet[2465]: W0906 00:09:28.842587 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:28.842692 kubelet[2465]: E0906 00:09:28.842600 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:28.842876 kubelet[2465]: E0906 00:09:28.842864 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:28.842876 kubelet[2465]: W0906 00:09:28.842876 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:28.842950 kubelet[2465]: E0906 00:09:28.842885 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:28.843229 kubelet[2465]: E0906 00:09:28.843215 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:28.843229 kubelet[2465]: W0906 00:09:28.843229 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:28.843286 kubelet[2465]: E0906 00:09:28.843238 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:28.921595 kubelet[2465]: E0906 00:09:28.920596 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2plj" podUID="20b69192-52d9-4abd-8a15-9ae7c5a2f6fb" Sep 6 00:09:28.980696 containerd[1446]: time="2025-09-06T00:09:28.980649024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7hznx,Uid:9597df6d-e62d-4930-a7a6-09195a936ae5,Namespace:calico-system,Attempt:0,}" Sep 6 00:09:29.003782 containerd[1446]: time="2025-09-06T00:09:29.003654840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:29.003782 containerd[1446]: time="2025-09-06T00:09:29.003714523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:29.003985 containerd[1446]: time="2025-09-06T00:09:29.003895050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:29.004009 containerd[1446]: time="2025-09-06T00:09:29.003988374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:29.016686 kubelet[2465]: E0906 00:09:29.015894 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.016686 kubelet[2465]: W0906 00:09:29.015914 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.016686 kubelet[2465]: E0906 00:09:29.015934 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.017793 kubelet[2465]: E0906 00:09:29.017623 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.021522 kubelet[2465]: W0906 00:09:29.017636 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.021631 kubelet[2465]: E0906 00:09:29.021617 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.021950 kubelet[2465]: E0906 00:09:29.021937 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.022035 kubelet[2465]: W0906 00:09:29.022022 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.022091 kubelet[2465]: E0906 00:09:29.022080 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.022353 kubelet[2465]: E0906 00:09:29.022341 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.022420 kubelet[2465]: W0906 00:09:29.022409 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.022474 kubelet[2465]: E0906 00:09:29.022464 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.022709 kubelet[2465]: E0906 00:09:29.022696 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.022807 kubelet[2465]: W0906 00:09:29.022794 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.022863 kubelet[2465]: E0906 00:09:29.022853 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:29.023103 kubelet[2465]: E0906 00:09:29.023091 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.023186 kubelet[2465]: W0906 00:09:29.023173 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.023241 kubelet[2465]: E0906 00:09:29.023231 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.023459 kubelet[2465]: E0906 00:09:29.023447 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.023530 kubelet[2465]: W0906 00:09:29.023518 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.023584 kubelet[2465]: E0906 00:09:29.023574 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.023803 kubelet[2465]: E0906 00:09:29.023790 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.023878 kubelet[2465]: W0906 00:09:29.023866 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.023932 kubelet[2465]: E0906 00:09:29.023922 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.024168 kubelet[2465]: E0906 00:09:29.024156 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.024247 kubelet[2465]: W0906 00:09:29.024234 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.024300 kubelet[2465]: E0906 00:09:29.024290 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.024486 kubelet[2465]: E0906 00:09:29.024475 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.024546 kubelet[2465]: W0906 00:09:29.024536 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.024604 kubelet[2465]: E0906 00:09:29.024593 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:29.025016 systemd[1]: Started cri-containerd-b729bf9ecd642466cf6987e49fed2d3a45723f0bcff2864167240e19ccbc2c85.scope - libcontainer container b729bf9ecd642466cf6987e49fed2d3a45723f0bcff2864167240e19ccbc2c85. Sep 6 00:09:29.025846 kubelet[2465]: E0906 00:09:29.025723 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.025846 kubelet[2465]: W0906 00:09:29.025748 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.025846 kubelet[2465]: E0906 00:09:29.025761 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.026016 kubelet[2465]: E0906 00:09:29.026000 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.026016 kubelet[2465]: W0906 00:09:29.026014 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.026071 kubelet[2465]: E0906 00:09:29.026024 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.026504 kubelet[2465]: E0906 00:09:29.026243 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.026504 kubelet[2465]: W0906 00:09:29.026259 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.026504 kubelet[2465]: E0906 00:09:29.026269 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.026504 kubelet[2465]: E0906 00:09:29.026423 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.026504 kubelet[2465]: W0906 00:09:29.026432 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.026504 kubelet[2465]: E0906 00:09:29.026440 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:29.026703 kubelet[2465]: E0906 00:09:29.026607 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.026703 kubelet[2465]: W0906 00:09:29.026615 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.026703 kubelet[2465]: E0906 00:09:29.026622 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.026810 kubelet[2465]: E0906 00:09:29.026797 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.026810 kubelet[2465]: W0906 00:09:29.026805 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.026852 kubelet[2465]: E0906 00:09:29.026813 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.027055 kubelet[2465]: E0906 00:09:29.026979 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.027055 kubelet[2465]: W0906 00:09:29.026990 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.027055 kubelet[2465]: E0906 00:09:29.026999 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.027215 kubelet[2465]: E0906 00:09:29.027190 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.027215 kubelet[2465]: W0906 00:09:29.027203 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.027215 kubelet[2465]: E0906 00:09:29.027210 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.027353 kubelet[2465]: E0906 00:09:29.027341 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.027353 kubelet[2465]: W0906 00:09:29.027351 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.027401 kubelet[2465]: E0906 00:09:29.027359 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:29.027494 kubelet[2465]: E0906 00:09:29.027483 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.027519 kubelet[2465]: W0906 00:09:29.027494 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.027519 kubelet[2465]: E0906 00:09:29.027501 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.031000 kubelet[2465]: E0906 00:09:29.030972 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.031072 kubelet[2465]: W0906 00:09:29.030997 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.031072 kubelet[2465]: E0906 00:09:29.031028 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.031072 kubelet[2465]: I0906 00:09:29.031055 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/20b69192-52d9-4abd-8a15-9ae7c5a2f6fb-registration-dir\") pod \"csi-node-driver-r2plj\" (UID: \"20b69192-52d9-4abd-8a15-9ae7c5a2f6fb\") " pod="calico-system/csi-node-driver-r2plj" Sep 6 00:09:29.031337 kubelet[2465]: E0906 00:09:29.031319 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.031381 kubelet[2465]: W0906 00:09:29.031350 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.031381 kubelet[2465]: E0906 00:09:29.031363 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.031427 kubelet[2465]: I0906 00:09:29.031384 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/20b69192-52d9-4abd-8a15-9ae7c5a2f6fb-socket-dir\") pod \"csi-node-driver-r2plj\" (UID: \"20b69192-52d9-4abd-8a15-9ae7c5a2f6fb\") " pod="calico-system/csi-node-driver-r2plj" Sep 6 00:09:29.031591 kubelet[2465]: E0906 00:09:29.031577 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.031627 kubelet[2465]: W0906 00:09:29.031610 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.031627 kubelet[2465]: E0906 00:09:29.031621 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:29.031682 kubelet[2465]: I0906 00:09:29.031642 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/20b69192-52d9-4abd-8a15-9ae7c5a2f6fb-varrun\") pod \"csi-node-driver-r2plj\" (UID: \"20b69192-52d9-4abd-8a15-9ae7c5a2f6fb\") " pod="calico-system/csi-node-driver-r2plj" Sep 6 00:09:29.031957 kubelet[2465]: E0906 00:09:29.031937 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.031957 kubelet[2465]: W0906 00:09:29.031952 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.032039 kubelet[2465]: E0906 00:09:29.031969 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.032039 kubelet[2465]: I0906 00:09:29.032011 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20b69192-52d9-4abd-8a15-9ae7c5a2f6fb-kubelet-dir\") pod \"csi-node-driver-r2plj\" (UID: \"20b69192-52d9-4abd-8a15-9ae7c5a2f6fb\") " pod="calico-system/csi-node-driver-r2plj" Sep 6 00:09:29.032476 kubelet[2465]: E0906 00:09:29.032444 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.032476 kubelet[2465]: W0906 00:09:29.032462 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.032476 kubelet[2465]: E0906 00:09:29.032472 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.032618 kubelet[2465]: I0906 00:09:29.032493 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tng7b\" (UniqueName: \"kubernetes.io/projected/20b69192-52d9-4abd-8a15-9ae7c5a2f6fb-kube-api-access-tng7b\") pod \"csi-node-driver-r2plj\" (UID: \"20b69192-52d9-4abd-8a15-9ae7c5a2f6fb\") " pod="calico-system/csi-node-driver-r2plj" Sep 6 00:09:29.033157 kubelet[2465]: E0906 00:09:29.033030 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.033157 kubelet[2465]: W0906 00:09:29.033061 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.033157 kubelet[2465]: E0906 00:09:29.033074 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:29.033537 kubelet[2465]: E0906 00:09:29.033326 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.033537 kubelet[2465]: W0906 00:09:29.033339 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.033537 kubelet[2465]: E0906 00:09:29.033349 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.033537 kubelet[2465]: E0906 00:09:29.033522 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.033537 kubelet[2465]: W0906 00:09:29.033530 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.033537 kubelet[2465]: E0906 00:09:29.033538 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.034034 kubelet[2465]: E0906 00:09:29.033718 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.034034 kubelet[2465]: W0906 00:09:29.033744 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.034034 kubelet[2465]: E0906 00:09:29.033754 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.034034 kubelet[2465]: E0906 00:09:29.033955 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.034034 kubelet[2465]: W0906 00:09:29.033963 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.034034 kubelet[2465]: E0906 00:09:29.033971 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.034354 kubelet[2465]: E0906 00:09:29.034338 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.034354 kubelet[2465]: W0906 00:09:29.034353 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.034414 kubelet[2465]: E0906 00:09:29.034364 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:29.034655 kubelet[2465]: E0906 00:09:29.034640 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.034655 kubelet[2465]: W0906 00:09:29.034654 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.034722 kubelet[2465]: E0906 00:09:29.034664 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.034909 kubelet[2465]: E0906 00:09:29.034896 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.034942 kubelet[2465]: W0906 00:09:29.034912 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.034942 kubelet[2465]: E0906 00:09:29.034921 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.035242 kubelet[2465]: E0906 00:09:29.035145 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.035242 kubelet[2465]: W0906 00:09:29.035198 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.035242 kubelet[2465]: E0906 00:09:29.035213 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.036539 kubelet[2465]: E0906 00:09:29.036311 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.036539 kubelet[2465]: W0906 00:09:29.036427 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.036539 kubelet[2465]: E0906 00:09:29.036440 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:29.066103 containerd[1446]: time="2025-09-06T00:09:29.066058183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7hznx,Uid:9597df6d-e62d-4930-a7a6-09195a936ae5,Namespace:calico-system,Attempt:0,} returns sandbox id \"b729bf9ecd642466cf6987e49fed2d3a45723f0bcff2864167240e19ccbc2c85\"" Sep 6 00:09:29.133517 kubelet[2465]: E0906 00:09:29.133488 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.133517 kubelet[2465]: W0906 00:09:29.133510 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.133667 kubelet[2465]: E0906 00:09:29.133529 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.134460 kubelet[2465]: E0906 00:09:29.134434 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.134460 kubelet[2465]: W0906 00:09:29.134449 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.134460 kubelet[2465]: E0906 00:09:29.134461 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.134800 kubelet[2465]: E0906 00:09:29.134711 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.135667 kubelet[2465]: W0906 00:09:29.134726 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.135734 kubelet[2465]: E0906 00:09:29.135713 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.137623 kubelet[2465]: E0906 00:09:29.137604 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.137623 kubelet[2465]: W0906 00:09:29.137620 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.137775 kubelet[2465]: E0906 00:09:29.137632 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:29.137896 kubelet[2465]: E0906 00:09:29.137871 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.137926 kubelet[2465]: W0906 00:09:29.137896 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.138098 kubelet[2465]: E0906 00:09:29.137908 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.138946 kubelet[2465]: E0906 00:09:29.138272 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.138946 kubelet[2465]: W0906 00:09:29.138281 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.138946 kubelet[2465]: E0906 00:09:29.138294 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.138946 kubelet[2465]: E0906 00:09:29.138467 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.138946 kubelet[2465]: W0906 00:09:29.138496 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.138946 kubelet[2465]: E0906 00:09:29.138506 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.138946 kubelet[2465]: E0906 00:09:29.138701 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.138946 kubelet[2465]: W0906 00:09:29.138710 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.138946 kubelet[2465]: E0906 00:09:29.138717 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.138946 kubelet[2465]: E0906 00:09:29.138938 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.139355 kubelet[2465]: W0906 00:09:29.138947 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.139355 kubelet[2465]: E0906 00:09:29.138956 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:29.139355 kubelet[2465]: E0906 00:09:29.139132 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.139355 kubelet[2465]: W0906 00:09:29.139140 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.139355 kubelet[2465]: E0906 00:09:29.139148 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.139621 kubelet[2465]: E0906 00:09:29.139512 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.139621 kubelet[2465]: W0906 00:09:29.139527 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.139621 kubelet[2465]: E0906 00:09:29.139539 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.139975 kubelet[2465]: E0906 00:09:29.139897 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.139975 kubelet[2465]: W0906 00:09:29.139912 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.139975 kubelet[2465]: E0906 00:09:29.139923 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.140392 kubelet[2465]: E0906 00:09:29.140282 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.140392 kubelet[2465]: W0906 00:09:29.140295 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.140392 kubelet[2465]: E0906 00:09:29.140305 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.140535 kubelet[2465]: E0906 00:09:29.140524 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.140631 kubelet[2465]: W0906 00:09:29.140580 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.140631 kubelet[2465]: E0906 00:09:29.140595 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:29.140931 kubelet[2465]: E0906 00:09:29.140841 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.140931 kubelet[2465]: W0906 00:09:29.140853 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.140931 kubelet[2465]: E0906 00:09:29.140862 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.141181 kubelet[2465]: E0906 00:09:29.141085 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.141181 kubelet[2465]: W0906 00:09:29.141097 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.141181 kubelet[2465]: E0906 00:09:29.141107 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.141456 kubelet[2465]: E0906 00:09:29.141443 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.141574 kubelet[2465]: W0906 00:09:29.141502 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.141574 kubelet[2465]: E0906 00:09:29.141516 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.141843 kubelet[2465]: E0906 00:09:29.141830 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.142041 kubelet[2465]: W0906 00:09:29.141921 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.142041 kubelet[2465]: E0906 00:09:29.141938 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.142464 kubelet[2465]: E0906 00:09:29.142447 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.142464 kubelet[2465]: W0906 00:09:29.142462 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.142576 kubelet[2465]: E0906 00:09:29.142474 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:29.142639 kubelet[2465]: E0906 00:09:29.142616 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.142639 kubelet[2465]: W0906 00:09:29.142623 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.142639 kubelet[2465]: E0906 00:09:29.142637 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.142827 kubelet[2465]: E0906 00:09:29.142815 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.142827 kubelet[2465]: W0906 00:09:29.142826 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.142877 kubelet[2465]: E0906 00:09:29.142836 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.143017 kubelet[2465]: E0906 00:09:29.143006 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.143017 kubelet[2465]: W0906 00:09:29.143016 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.143075 kubelet[2465]: E0906 00:09:29.143025 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.143270 kubelet[2465]: E0906 00:09:29.143255 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.143299 kubelet[2465]: W0906 00:09:29.143269 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.143299 kubelet[2465]: E0906 00:09:29.143281 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.143484 kubelet[2465]: E0906 00:09:29.143473 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.143484 kubelet[2465]: W0906 00:09:29.143483 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.143548 kubelet[2465]: E0906 00:09:29.143492 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:29.143653 kubelet[2465]: E0906 00:09:29.143643 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.143653 kubelet[2465]: W0906 00:09:29.143653 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.143713 kubelet[2465]: E0906 00:09:29.143661 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.153329 kubelet[2465]: E0906 00:09:29.153310 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:29.153329 kubelet[2465]: W0906 00:09:29.153326 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:29.153436 kubelet[2465]: E0906 00:09:29.153340 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:29.878156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3530543255.mount: Deactivated successfully. Sep 6 00:09:30.490139 kubelet[2465]: E0906 00:09:30.490041 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2plj" podUID="20b69192-52d9-4abd-8a15-9ae7c5a2f6fb" Sep 6 00:09:30.978713 containerd[1446]: time="2025-09-06T00:09:30.978608227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:30.979263 containerd[1446]: time="2025-09-06T00:09:30.979221371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 6 00:09:30.984415 containerd[1446]: time="2025-09-06T00:09:30.984380731Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:30.988721 containerd[1446]: time="2025-09-06T00:09:30.988442490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:30.989707 containerd[1446]: time="2025-09-06T00:09:30.989682898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.189526581s" Sep 6 00:09:30.989879 containerd[1446]: time="2025-09-06T00:09:30.989765541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 6 00:09:30.990960 
containerd[1446]: time="2025-09-06T00:09:30.990930346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 6 00:09:31.006180 containerd[1446]: time="2025-09-06T00:09:31.006033285Z" level=info msg="CreateContainer within sandbox \"06e3a86e6b90a05a44e570e87e4e6b90ca7a4c3f73244b33ccedcb60e3256c30\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 6 00:09:31.021363 containerd[1446]: time="2025-09-06T00:09:31.021306974Z" level=info msg="CreateContainer within sandbox \"06e3a86e6b90a05a44e570e87e4e6b90ca7a4c3f73244b33ccedcb60e3256c30\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b543096670bdbc07bb4a3b822f026d9ce93d15e5ff9b9d124d00a34701453080\"" Sep 6 00:09:31.021691 containerd[1446]: time="2025-09-06T00:09:31.021672348Z" level=info msg="StartContainer for \"b543096670bdbc07bb4a3b822f026d9ce93d15e5ff9b9d124d00a34701453080\"" Sep 6 00:09:31.051957 systemd[1]: Started cri-containerd-b543096670bdbc07bb4a3b822f026d9ce93d15e5ff9b9d124d00a34701453080.scope - libcontainer container b543096670bdbc07bb4a3b822f026d9ce93d15e5ff9b9d124d00a34701453080. Sep 6 00:09:31.133748 containerd[1446]: time="2025-09-06T00:09:31.133052774Z" level=info msg="StartContainer for \"b543096670bdbc07bb4a3b822f026d9ce93d15e5ff9b9d124d00a34701453080\" returns successfully" Sep 6 00:09:31.559329 kubelet[2465]: E0906 00:09:31.558986 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:31.576982 kubelet[2465]: I0906 00:09:31.576902 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d6989645c-7dxsd" podStartSLOduration=1.38573857 podStartE2EDuration="3.576878496s" podCreationTimestamp="2025-09-06 00:09:28 +0000 UTC" firstStartedPulling="2025-09-06 00:09:28.799613454 +0000 UTC m=+21.384437212" lastFinishedPulling="2025-09-06 00:09:30.99075338 +0000 UTC m=+23.575577138" observedRunningTime="2025-09-06 00:09:31.57643892 +0000 UTC m=+24.161262638" watchObservedRunningTime="2025-09-06 00:09:31.576878496 +0000 UTC m=+24.161702254" Sep 6 00:09:31.645500 kubelet[2465]: E0906 00:09:31.645471 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.645500 kubelet[2465]: W0906 00:09:31.645494 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.645653 kubelet[2465]: E0906 00:09:31.645515 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.645733 kubelet[2465]: E0906 00:09:31.645719 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.645791 kubelet[2465]: W0906 00:09:31.645744 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.645833 kubelet[2465]: E0906 00:09:31.645790 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:31.645985 kubelet[2465]: E0906 00:09:31.645973 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.646017 kubelet[2465]: W0906 00:09:31.645986 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.646017 kubelet[2465]: E0906 00:09:31.645996 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.646176 kubelet[2465]: E0906 00:09:31.646165 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.646176 kubelet[2465]: W0906 00:09:31.646176 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.646263 kubelet[2465]: E0906 00:09:31.646184 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.646439 kubelet[2465]: E0906 00:09:31.646426 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.646473 kubelet[2465]: W0906 00:09:31.646439 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.646473 kubelet[2465]: E0906 00:09:31.646449 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.646614 kubelet[2465]: E0906 00:09:31.646605 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.646614 kubelet[2465]: W0906 00:09:31.646614 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.646687 kubelet[2465]: E0906 00:09:31.646621 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.646787 kubelet[2465]: E0906 00:09:31.646777 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.646787 kubelet[2465]: W0906 00:09:31.646786 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.646858 kubelet[2465]: E0906 00:09:31.646794 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:31.646944 kubelet[2465]: E0906 00:09:31.646935 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.646944 kubelet[2465]: W0906 00:09:31.646944 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.647023 kubelet[2465]: E0906 00:09:31.646952 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.647114 kubelet[2465]: E0906 00:09:31.647104 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.647210 kubelet[2465]: W0906 00:09:31.647116 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.647210 kubelet[2465]: E0906 00:09:31.647124 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.647297 kubelet[2465]: E0906 00:09:31.647271 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.647297 kubelet[2465]: W0906 00:09:31.647284 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.647297 kubelet[2465]: E0906 00:09:31.647291 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.647436 kubelet[2465]: E0906 00:09:31.647423 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.647436 kubelet[2465]: W0906 00:09:31.647432 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.647493 kubelet[2465]: E0906 00:09:31.647442 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.647580 kubelet[2465]: E0906 00:09:31.647570 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.647580 kubelet[2465]: W0906 00:09:31.647579 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.647654 kubelet[2465]: E0906 00:09:31.647586 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:31.647736 kubelet[2465]: E0906 00:09:31.647723 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.647775 kubelet[2465]: W0906 00:09:31.647741 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.647775 kubelet[2465]: E0906 00:09:31.647750 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.647904 kubelet[2465]: E0906 00:09:31.647894 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.647904 kubelet[2465]: W0906 00:09:31.647903 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.647971 kubelet[2465]: E0906 00:09:31.647910 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.648055 kubelet[2465]: E0906 00:09:31.648045 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.648055 kubelet[2465]: W0906 00:09:31.648054 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.648108 kubelet[2465]: E0906 00:09:31.648061 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.656555 kubelet[2465]: E0906 00:09:31.656439 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.656555 kubelet[2465]: W0906 00:09:31.656456 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.656555 kubelet[2465]: E0906 00:09:31.656468 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.656753 kubelet[2465]: E0906 00:09:31.656726 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.656807 kubelet[2465]: W0906 00:09:31.656796 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.656867 kubelet[2465]: E0906 00:09:31.656857 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:31.657265 kubelet[2465]: E0906 00:09:31.657105 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.657265 kubelet[2465]: W0906 00:09:31.657117 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.657265 kubelet[2465]: E0906 00:09:31.657127 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.657433 kubelet[2465]: E0906 00:09:31.657420 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.657488 kubelet[2465]: W0906 00:09:31.657477 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.657547 kubelet[2465]: E0906 00:09:31.657536 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.657791 kubelet[2465]: E0906 00:09:31.657776 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.657855 kubelet[2465]: W0906 00:09:31.657844 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.658035 kubelet[2465]: E0906 00:09:31.657909 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.658176 kubelet[2465]: E0906 00:09:31.658162 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.658251 kubelet[2465]: W0906 00:09:31.658238 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.658311 kubelet[2465]: E0906 00:09:31.658298 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.658563 kubelet[2465]: E0906 00:09:31.658549 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.658625 kubelet[2465]: W0906 00:09:31.658614 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.658873 kubelet[2465]: E0906 00:09:31.658674 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:31.659139 kubelet[2465]: E0906 00:09:31.659003 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.659139 kubelet[2465]: W0906 00:09:31.659017 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.659139 kubelet[2465]: E0906 00:09:31.659028 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.659290 kubelet[2465]: E0906 00:09:31.659278 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.659350 kubelet[2465]: W0906 00:09:31.659339 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.659453 kubelet[2465]: E0906 00:09:31.659441 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.659769 kubelet[2465]: E0906 00:09:31.659747 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.659769 kubelet[2465]: W0906 00:09:31.659765 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.659853 kubelet[2465]: E0906 00:09:31.659777 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.659967 kubelet[2465]: E0906 00:09:31.659957 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.659967 kubelet[2465]: W0906 00:09:31.659966 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.660028 kubelet[2465]: E0906 00:09:31.659974 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.660121 kubelet[2465]: E0906 00:09:31.660110 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.660121 kubelet[2465]: W0906 00:09:31.660119 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.660175 kubelet[2465]: E0906 00:09:31.660126 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:31.660275 kubelet[2465]: E0906 00:09:31.660263 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.660275 kubelet[2465]: W0906 00:09:31.660272 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.660334 kubelet[2465]: E0906 00:09:31.660280 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.660465 kubelet[2465]: E0906 00:09:31.660452 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.660465 kubelet[2465]: W0906 00:09:31.660463 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.660520 kubelet[2465]: E0906 00:09:31.660472 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.660934 kubelet[2465]: E0906 00:09:31.660807 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.660934 kubelet[2465]: W0906 00:09:31.660823 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.660934 kubelet[2465]: E0906 00:09:31.660835 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.661116 kubelet[2465]: E0906 00:09:31.661103 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.661172 kubelet[2465]: W0906 00:09:31.661162 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.661231 kubelet[2465]: E0906 00:09:31.661221 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.661628 kubelet[2465]: E0906 00:09:31.661581 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.661628 kubelet[2465]: W0906 00:09:31.661593 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.661628 kubelet[2465]: E0906 00:09:31.661603 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:09:31.666453 kubelet[2465]: E0906 00:09:31.666429 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:09:31.666453 kubelet[2465]: W0906 00:09:31.666444 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:09:31.666453 kubelet[2465]: E0906 00:09:31.666456 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:09:31.894474 containerd[1446]: time="2025-09-06T00:09:31.894367356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:31.896045 containerd[1446]: time="2025-09-06T00:09:31.896010177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 6 00:09:31.897486 containerd[1446]: time="2025-09-06T00:09:31.897433390Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:31.899545 containerd[1446]: time="2025-09-06T00:09:31.899516707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:31.900228 containerd[1446]: time="2025-09-06T00:09:31.900078408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 909.007656ms" Sep 6 00:09:31.900228 containerd[1446]: time="2025-09-06T00:09:31.900110649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 6 00:09:31.914464 containerd[1446]: time="2025-09-06T00:09:31.914431663Z" level=info msg="CreateContainer within sandbox \"b729bf9ecd642466cf6987e49fed2d3a45723f0bcff2864167240e19ccbc2c85\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 6 00:09:31.925831 containerd[1446]: time="2025-09-06T00:09:31.925786965Z" level=info msg="CreateContainer within sandbox \"b729bf9ecd642466cf6987e49fed2d3a45723f0bcff2864167240e19ccbc2c85\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b3abf2360b83ef15f052ed34dc51fad6c922dbc2c82549fc13e23224c13f7e89\"" Sep 6 00:09:31.926260 containerd[1446]: time="2025-09-06T00:09:31.926237142Z" level=info msg="StartContainer for \"b3abf2360b83ef15f052ed34dc51fad6c922dbc2c82549fc13e23224c13f7e89\"" Sep 6 00:09:31.963888 systemd[1]: Started cri-containerd-b3abf2360b83ef15f052ed34dc51fad6c922dbc2c82549fc13e23224c13f7e89.scope - libcontainer container b3abf2360b83ef15f052ed34dc51fad6c922dbc2c82549fc13e23224c13f7e89. 
Sep 6 00:09:32.000870 containerd[1446]: time="2025-09-06T00:09:32.000815718Z" level=info msg="StartContainer for \"b3abf2360b83ef15f052ed34dc51fad6c922dbc2c82549fc13e23224c13f7e89\" returns successfully" Sep 6 00:09:32.002573 systemd[1]: cri-containerd-b3abf2360b83ef15f052ed34dc51fad6c922dbc2c82549fc13e23224c13f7e89.scope: Deactivated successfully. Sep 6 00:09:32.021696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3abf2360b83ef15f052ed34dc51fad6c922dbc2c82549fc13e23224c13f7e89-rootfs.mount: Deactivated successfully. Sep 6 00:09:32.042917 containerd[1446]: time="2025-09-06T00:09:32.031609935Z" level=info msg="shim disconnected" id=b3abf2360b83ef15f052ed34dc51fad6c922dbc2c82549fc13e23224c13f7e89 namespace=k8s.io Sep 6 00:09:32.042917 containerd[1446]: time="2025-09-06T00:09:32.042911898Z" level=warning msg="cleaning up after shim disconnected" id=b3abf2360b83ef15f052ed34dc51fad6c922dbc2c82549fc13e23224c13f7e89 namespace=k8s.io Sep 6 00:09:32.042917 containerd[1446]: time="2025-09-06T00:09:32.042923978Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:09:32.490113 kubelet[2465]: E0906 00:09:32.490054 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2plj" podUID="20b69192-52d9-4abd-8a15-9ae7c5a2f6fb" Sep 6 00:09:32.560598 kubelet[2465]: I0906 00:09:32.560572 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:09:32.561138 kubelet[2465]: E0906 00:09:32.560967 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:32.562259 containerd[1446]: time="2025-09-06T00:09:32.562209439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 6 00:09:34.490340 kubelet[2465]: E0906 00:09:34.490282 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2plj" podUID="20b69192-52d9-4abd-8a15-9ae7c5a2f6fb" Sep 6 00:09:35.431528 containerd[1446]: time="2025-09-06T00:09:35.431489280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:35.432292 containerd[1446]: time="2025-09-06T00:09:35.432257264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 6 00:09:35.433441 containerd[1446]: time="2025-09-06T00:09:35.433134492Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:35.435550 containerd[1446]: time="2025-09-06T00:09:35.435392643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:35.435892 containerd[1446]: time="2025-09-06T00:09:35.435864497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo 
tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.873606456s" Sep 6 00:09:35.435934 containerd[1446]: time="2025-09-06T00:09:35.435896498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 6 00:09:35.440751 containerd[1446]: time="2025-09-06T00:09:35.440673528Z" level=info msg="CreateContainer within sandbox \"b729bf9ecd642466cf6987e49fed2d3a45723f0bcff2864167240e19ccbc2c85\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 6 00:09:35.458821 containerd[1446]: time="2025-09-06T00:09:35.458785657Z" level=info msg="CreateContainer within sandbox \"b729bf9ecd642466cf6987e49fed2d3a45723f0bcff2864167240e19ccbc2c85\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"76e237c39fc5b68f51b0d35e1699ae787cb6df1c1b98c19de6c446ae00ee7957\"" Sep 6 00:09:35.459219 containerd[1446]: time="2025-09-06T00:09:35.459189230Z" level=info msg="StartContainer for \"76e237c39fc5b68f51b0d35e1699ae787cb6df1c1b98c19de6c446ae00ee7957\"" Sep 6 00:09:35.489918 systemd[1]: Started cri-containerd-76e237c39fc5b68f51b0d35e1699ae787cb6df1c1b98c19de6c446ae00ee7957.scope - libcontainer container 76e237c39fc5b68f51b0d35e1699ae787cb6df1c1b98c19de6c446ae00ee7957. Sep 6 00:09:35.517131 containerd[1446]: time="2025-09-06T00:09:35.517087968Z" level=info msg="StartContainer for \"76e237c39fc5b68f51b0d35e1699ae787cb6df1c1b98c19de6c446ae00ee7957\" returns successfully" Sep 6 00:09:36.089998 systemd[1]: cri-containerd-76e237c39fc5b68f51b0d35e1699ae787cb6df1c1b98c19de6c446ae00ee7957.scope: Deactivated successfully. Sep 6 00:09:36.109822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76e237c39fc5b68f51b0d35e1699ae787cb6df1c1b98c19de6c446ae00ee7957-rootfs.mount: Deactivated successfully. Sep 6 00:09:36.149961 kubelet[2465]: I0906 00:09:36.149932 2465 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 6 00:09:36.163111 containerd[1446]: time="2025-09-06T00:09:36.163056733Z" level=info msg="shim disconnected" id=76e237c39fc5b68f51b0d35e1699ae787cb6df1c1b98c19de6c446ae00ee7957 namespace=k8s.io Sep 6 00:09:36.164279 containerd[1446]: time="2025-09-06T00:09:36.164103565Z" level=warning msg="cleaning up after shim disconnected" id=76e237c39fc5b68f51b0d35e1699ae787cb6df1c1b98c19de6c446ae00ee7957 namespace=k8s.io Sep 6 00:09:36.164279 containerd[1446]: time="2025-09-06T00:09:36.164125486Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:09:36.207181 systemd[1]: Created slice kubepods-besteffort-pod28c61605_48bf_420c_9e67_dd95e49a735f.slice - libcontainer container kubepods-besteffort-pod28c61605_48bf_420c_9e67_dd95e49a735f.slice. Sep 6 00:09:36.227797 systemd[1]: Created slice kubepods-burstable-pod8fe42dcf_c7fe_4f83_b5d8_07f73b02320a.slice - libcontainer container kubepods-burstable-pod8fe42dcf_c7fe_4f83_b5d8_07f73b02320a.slice. Sep 6 00:09:36.235436 systemd[1]: Created slice kubepods-burstable-pod74911617_3e52_41e1_a84e_8da0e31464f5.slice - libcontainer container kubepods-burstable-pod74911617_3e52_41e1_a84e_8da0e31464f5.slice. Sep 6 00:09:36.242573 systemd[1]: Created slice kubepods-besteffort-pod57617a63_d31f_472d_91f1_03fac276c695.slice - libcontainer container kubepods-besteffort-pod57617a63_d31f_472d_91f1_03fac276c695.slice. 
Sep 6 00:09:36.249210 systemd[1]: Created slice kubepods-besteffort-pod310f0b3e_872d_47ec_bab7_107416283ff9.slice - libcontainer container kubepods-besteffort-pod310f0b3e_872d_47ec_bab7_107416283ff9.slice. Sep 6 00:09:36.255230 systemd[1]: Created slice kubepods-besteffort-podb270af4a_cf4b_43ff_ae1a_ec07098c6468.slice - libcontainer container kubepods-besteffort-podb270af4a_cf4b_43ff_ae1a_ec07098c6468.slice. Sep 6 00:09:36.262150 systemd[1]: Created slice kubepods-besteffort-pod99be312a_ebdb_437e_9c29_9b53e7c2283e.slice - libcontainer container kubepods-besteffort-pod99be312a_ebdb_437e_9c29_9b53e7c2283e.slice. Sep 6 00:09:36.290018 kubelet[2465]: I0906 00:09:36.289961 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/99be312a-ebdb-437e-9c29-9b53e7c2283e-whisker-backend-key-pair\") pod \"whisker-5b585579bb-pgk4g\" (UID: \"99be312a-ebdb-437e-9c29-9b53e7c2283e\") " pod="calico-system/whisker-5b585579bb-pgk4g" Sep 6 00:09:36.290018 kubelet[2465]: I0906 00:09:36.290005 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pmvg\" (UniqueName: \"kubernetes.io/projected/99be312a-ebdb-437e-9c29-9b53e7c2283e-kube-api-access-9pmvg\") pod \"whisker-5b585579bb-pgk4g\" (UID: \"99be312a-ebdb-437e-9c29-9b53e7c2283e\") " pod="calico-system/whisker-5b585579bb-pgk4g" Sep 6 00:09:36.293184 kubelet[2465]: I0906 00:09:36.290024 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwl9j\" (UniqueName: \"kubernetes.io/projected/b270af4a-cf4b-43ff-ae1a-ec07098c6468-kube-api-access-lwl9j\") pod \"goldmane-54d579b49d-cdhv4\" (UID: \"b270af4a-cf4b-43ff-ae1a-ec07098c6468\") " pod="calico-system/goldmane-54d579b49d-cdhv4" Sep 6 00:09:36.293184 kubelet[2465]: I0906 00:09:36.293183 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74911617-3e52-41e1-a84e-8da0e31464f5-config-volume\") pod \"coredns-674b8bbfcf-msd89\" (UID: \"74911617-3e52-41e1-a84e-8da0e31464f5\") " pod="kube-system/coredns-674b8bbfcf-msd89" Sep 6 00:09:36.293546 kubelet[2465]: I0906 00:09:36.293206 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99be312a-ebdb-437e-9c29-9b53e7c2283e-whisker-ca-bundle\") pod \"whisker-5b585579bb-pgk4g\" (UID: \"99be312a-ebdb-437e-9c29-9b53e7c2283e\") " pod="calico-system/whisker-5b585579bb-pgk4g" Sep 6 00:09:36.293546 kubelet[2465]: I0906 00:09:36.293224 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/310f0b3e-872d-47ec-bab7-107416283ff9-calico-apiserver-certs\") pod \"calico-apiserver-57978c77c8-2bgst\" (UID: \"310f0b3e-872d-47ec-bab7-107416283ff9\") " pod="calico-apiserver/calico-apiserver-57978c77c8-2bgst" Sep 6 00:09:36.293546 kubelet[2465]: I0906 00:09:36.293240 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28c61605-48bf-420c-9e67-dd95e49a735f-calico-apiserver-certs\") pod \"calico-apiserver-57978c77c8-87cm2\" (UID: \"28c61605-48bf-420c-9e67-dd95e49a735f\") " pod="calico-apiserver/calico-apiserver-57978c77c8-87cm2" Sep 6 
00:09:36.293546 kubelet[2465]: I0906 00:09:36.293259 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fe42dcf-c7fe-4f83-b5d8-07f73b02320a-config-volume\") pod \"coredns-674b8bbfcf-jn4fk\" (UID: \"8fe42dcf-c7fe-4f83-b5d8-07f73b02320a\") " pod="kube-system/coredns-674b8bbfcf-jn4fk" Sep 6 00:09:36.293546 kubelet[2465]: I0906 00:09:36.293313 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p5zq\" (UniqueName: \"kubernetes.io/projected/310f0b3e-872d-47ec-bab7-107416283ff9-kube-api-access-5p5zq\") pod \"calico-apiserver-57978c77c8-2bgst\" (UID: \"310f0b3e-872d-47ec-bab7-107416283ff9\") " pod="calico-apiserver/calico-apiserver-57978c77c8-2bgst" Sep 6 00:09:36.293667 kubelet[2465]: I0906 00:09:36.293395 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b270af4a-cf4b-43ff-ae1a-ec07098c6468-config\") pod \"goldmane-54d579b49d-cdhv4\" (UID: \"b270af4a-cf4b-43ff-ae1a-ec07098c6468\") " pod="calico-system/goldmane-54d579b49d-cdhv4" Sep 6 00:09:36.293667 kubelet[2465]: I0906 00:09:36.293431 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57617a63-d31f-472d-91f1-03fac276c695-tigera-ca-bundle\") pod \"calico-kube-controllers-5cd46bb99-vsnrh\" (UID: \"57617a63-d31f-472d-91f1-03fac276c695\") " pod="calico-system/calico-kube-controllers-5cd46bb99-vsnrh" Sep 6 00:09:36.293667 kubelet[2465]: I0906 00:09:36.293481 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b270af4a-cf4b-43ff-ae1a-ec07098c6468-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-cdhv4\" (UID: \"b270af4a-cf4b-43ff-ae1a-ec07098c6468\") " pod="calico-system/goldmane-54d579b49d-cdhv4" Sep 6 00:09:36.293667 kubelet[2465]: I0906 00:09:36.293521 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwtfc\" (UniqueName: \"kubernetes.io/projected/28c61605-48bf-420c-9e67-dd95e49a735f-kube-api-access-rwtfc\") pod \"calico-apiserver-57978c77c8-87cm2\" (UID: \"28c61605-48bf-420c-9e67-dd95e49a735f\") " pod="calico-apiserver/calico-apiserver-57978c77c8-87cm2" Sep 6 00:09:36.293667 kubelet[2465]: I0906 00:09:36.293550 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b270af4a-cf4b-43ff-ae1a-ec07098c6468-goldmane-key-pair\") pod \"goldmane-54d579b49d-cdhv4\" (UID: \"b270af4a-cf4b-43ff-ae1a-ec07098c6468\") " pod="calico-system/goldmane-54d579b49d-cdhv4" Sep 6 00:09:36.293798 kubelet[2465]: I0906 00:09:36.293565 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6bzc\" (UniqueName: \"kubernetes.io/projected/57617a63-d31f-472d-91f1-03fac276c695-kube-api-access-w6bzc\") pod \"calico-kube-controllers-5cd46bb99-vsnrh\" (UID: \"57617a63-d31f-472d-91f1-03fac276c695\") " pod="calico-system/calico-kube-controllers-5cd46bb99-vsnrh" Sep 6 00:09:36.293798 kubelet[2465]: I0906 00:09:36.293584 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz7ln\" (UniqueName: 
\"kubernetes.io/projected/8fe42dcf-c7fe-4f83-b5d8-07f73b02320a-kube-api-access-cz7ln\") pod \"coredns-674b8bbfcf-jn4fk\" (UID: \"8fe42dcf-c7fe-4f83-b5d8-07f73b02320a\") " pod="kube-system/coredns-674b8bbfcf-jn4fk" Sep 6 00:09:36.293798 kubelet[2465]: I0906 00:09:36.293599 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhwhq\" (UniqueName: \"kubernetes.io/projected/74911617-3e52-41e1-a84e-8da0e31464f5-kube-api-access-jhwhq\") pod \"coredns-674b8bbfcf-msd89\" (UID: \"74911617-3e52-41e1-a84e-8da0e31464f5\") " pod="kube-system/coredns-674b8bbfcf-msd89" Sep 6 00:09:36.496260 systemd[1]: Created slice kubepods-besteffort-pod20b69192_52d9_4abd_8a15_9ae7c5a2f6fb.slice - libcontainer container kubepods-besteffort-pod20b69192_52d9_4abd_8a15_9ae7c5a2f6fb.slice. Sep 6 00:09:36.498437 containerd[1446]: time="2025-09-06T00:09:36.498400010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2plj,Uid:20b69192-52d9-4abd-8a15-9ae7c5a2f6fb,Namespace:calico-system,Attempt:0,}" Sep 6 00:09:36.517394 containerd[1446]: time="2025-09-06T00:09:36.517318741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57978c77c8-87cm2,Uid:28c61605-48bf-420c-9e67-dd95e49a735f,Namespace:calico-apiserver,Attempt:0,}" Sep 6 00:09:36.533238 kubelet[2465]: E0906 00:09:36.533205 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:36.534090 containerd[1446]: time="2025-09-06T00:09:36.533916642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jn4fk,Uid:8fe42dcf-c7fe-4f83-b5d8-07f73b02320a,Namespace:kube-system,Attempt:0,}" Sep 6 00:09:36.540868 kubelet[2465]: E0906 00:09:36.540831 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:36.541603 containerd[1446]: time="2025-09-06T00:09:36.541545112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-msd89,Uid:74911617-3e52-41e1-a84e-8da0e31464f5,Namespace:kube-system,Attempt:0,}" Sep 6 00:09:36.550675 containerd[1446]: time="2025-09-06T00:09:36.550637266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cd46bb99-vsnrh,Uid:57617a63-d31f-472d-91f1-03fac276c695,Namespace:calico-system,Attempt:0,}" Sep 6 00:09:36.566902 containerd[1446]: time="2025-09-06T00:09:36.566849355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b585579bb-pgk4g,Uid:99be312a-ebdb-437e-9c29-9b53e7c2283e,Namespace:calico-system,Attempt:0,}" Sep 6 00:09:36.568035 containerd[1446]: time="2025-09-06T00:09:36.568000910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-cdhv4,Uid:b270af4a-cf4b-43ff-ae1a-ec07098c6468,Namespace:calico-system,Attempt:0,}" Sep 6 00:09:36.570779 containerd[1446]: time="2025-09-06T00:09:36.569709721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57978c77c8-2bgst,Uid:310f0b3e-872d-47ec-bab7-107416283ff9,Namespace:calico-apiserver,Attempt:0,}" Sep 6 00:09:36.600813 containerd[1446]: time="2025-09-06T00:09:36.600677416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 6 00:09:36.632788 containerd[1446]: time="2025-09-06T00:09:36.632696902Z" level=error msg="Failed to destroy network for sandbox 
\"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.633092 containerd[1446]: time="2025-09-06T00:09:36.633058832Z" level=error msg="encountered an error cleaning up failed sandbox \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.633145 containerd[1446]: time="2025-09-06T00:09:36.633111194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2plj,Uid:20b69192-52d9-4abd-8a15-9ae7c5a2f6fb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.640882 containerd[1446]: time="2025-09-06T00:09:36.640843307Z" level=error msg="Failed to destroy network for sandbox \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.641158 containerd[1446]: time="2025-09-06T00:09:36.641124796Z" level=error msg="encountered an error cleaning up failed sandbox \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.641199 containerd[1446]: time="2025-09-06T00:09:36.641174557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jn4fk,Uid:8fe42dcf-c7fe-4f83-b5d8-07f73b02320a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.643595 kubelet[2465]: E0906 00:09:36.643534 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.643723 kubelet[2465]: E0906 00:09:36.643615 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jn4fk" Sep 6 
00:09:36.643723 kubelet[2465]: E0906 00:09:36.643635 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jn4fk" Sep 6 00:09:36.644047 kubelet[2465]: E0906 00:09:36.643765 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jn4fk_kube-system(8fe42dcf-c7fe-4f83-b5d8-07f73b02320a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jn4fk_kube-system(8fe42dcf-c7fe-4f83-b5d8-07f73b02320a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jn4fk" podUID="8fe42dcf-c7fe-4f83-b5d8-07f73b02320a" Sep 6 00:09:36.645110 kubelet[2465]: E0906 00:09:36.645051 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.645195 kubelet[2465]: E0906 00:09:36.645123 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r2plj" Sep 6 00:09:36.645195 kubelet[2465]: E0906 00:09:36.645143 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r2plj" Sep 6 00:09:36.645357 kubelet[2465]: E0906 00:09:36.645317 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r2plj_calico-system(20b69192-52d9-4abd-8a15-9ae7c5a2f6fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r2plj_calico-system(20b69192-52d9-4abd-8a15-9ae7c5a2f6fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r2plj" podUID="20b69192-52d9-4abd-8a15-9ae7c5a2f6fb" Sep 6 00:09:36.660840 containerd[1446]: 
time="2025-09-06T00:09:36.660785469Z" level=error msg="Failed to destroy network for sandbox \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.661193 containerd[1446]: time="2025-09-06T00:09:36.661159400Z" level=error msg="encountered an error cleaning up failed sandbox \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.661262 containerd[1446]: time="2025-09-06T00:09:36.661212722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57978c77c8-87cm2,Uid:28c61605-48bf-420c-9e67-dd95e49a735f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.661487 kubelet[2465]: E0906 00:09:36.661449 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.661539 kubelet[2465]: E0906 00:09:36.661512 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57978c77c8-87cm2" Sep 6 00:09:36.661539 kubelet[2465]: E0906 00:09:36.661531 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57978c77c8-87cm2" Sep 6 00:09:36.661608 kubelet[2465]: E0906 00:09:36.661580 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57978c77c8-87cm2_calico-apiserver(28c61605-48bf-420c-9e67-dd95e49a735f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57978c77c8-87cm2_calico-apiserver(28c61605-48bf-420c-9e67-dd95e49a735f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-57978c77c8-87cm2" podUID="28c61605-48bf-420c-9e67-dd95e49a735f" Sep 6 00:09:36.681249 containerd[1446]: time="2025-09-06T00:09:36.681188844Z" level=error msg="Failed to destroy network for sandbox \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.681592 containerd[1446]: time="2025-09-06T00:09:36.681560616Z" level=error msg="encountered an error cleaning up failed sandbox \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.681656 containerd[1446]: time="2025-09-06T00:09:36.681613177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-msd89,Uid:74911617-3e52-41e1-a84e-8da0e31464f5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.682188 kubelet[2465]: E0906 00:09:36.681885 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.682188 kubelet[2465]: E0906 00:09:36.681983 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-msd89" Sep 6 00:09:36.682188 kubelet[2465]: E0906 00:09:36.682005 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-msd89" Sep 6 00:09:36.682341 kubelet[2465]: E0906 00:09:36.682058 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-msd89_kube-system(74911617-3e52-41e1-a84e-8da0e31464f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-msd89_kube-system(74911617-3e52-41e1-a84e-8da0e31464f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-msd89" podUID="74911617-3e52-41e1-a84e-8da0e31464f5" Sep 6 00:09:36.701246 containerd[1446]: time="2025-09-06T00:09:36.701197368Z" level=error msg="Failed to destroy network for sandbox \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.701552 containerd[1446]: time="2025-09-06T00:09:36.701526258Z" level=error msg="encountered an error cleaning up failed sandbox \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.701628 containerd[1446]: time="2025-09-06T00:09:36.701577740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cd46bb99-vsnrh,Uid:57617a63-d31f-472d-91f1-03fac276c695,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.702969 kubelet[2465]: E0906 00:09:36.701785 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.702969 kubelet[2465]: E0906 00:09:36.701837 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cd46bb99-vsnrh" Sep 6 00:09:36.702969 kubelet[2465]: E0906 00:09:36.701856 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cd46bb99-vsnrh" Sep 6 00:09:36.703145 kubelet[2465]: E0906 00:09:36.701905 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cd46bb99-vsnrh_calico-system(57617a63-d31f-472d-91f1-03fac276c695)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cd46bb99-vsnrh_calico-system(57617a63-d31f-472d-91f1-03fac276c695)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cd46bb99-vsnrh" podUID="57617a63-d31f-472d-91f1-03fac276c695" Sep 6 00:09:36.714715 containerd[1446]: time="2025-09-06T00:09:36.713826829Z" level=error msg="Failed to destroy network for sandbox \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.714715 containerd[1446]: time="2025-09-06T00:09:36.714137238Z" level=error msg="encountered an error cleaning up failed sandbox \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.714715 containerd[1446]: time="2025-09-06T00:09:36.714189880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b585579bb-pgk4g,Uid:99be312a-ebdb-437e-9c29-9b53e7c2283e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.715011 kubelet[2465]: E0906 00:09:36.714377 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.715011 kubelet[2465]: E0906 00:09:36.714425 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b585579bb-pgk4g" Sep 6 00:09:36.715011 kubelet[2465]: E0906 00:09:36.714444 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b585579bb-pgk4g" Sep 6 00:09:36.715113 kubelet[2465]: E0906 00:09:36.714484 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b585579bb-pgk4g_calico-system(99be312a-ebdb-437e-9c29-9b53e7c2283e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b585579bb-pgk4g_calico-system(99be312a-ebdb-437e-9c29-9b53e7c2283e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b585579bb-pgk4g" podUID="99be312a-ebdb-437e-9c29-9b53e7c2283e" Sep 6 00:09:36.717380 containerd[1446]: time="2025-09-06T00:09:36.717334135Z" level=error msg="Failed to destroy network for sandbox \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.717704 containerd[1446]: time="2025-09-06T00:09:36.717676185Z" level=error msg="encountered an error cleaning up failed sandbox \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.717836 containerd[1446]: time="2025-09-06T00:09:36.717813789Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-cdhv4,Uid:b270af4a-cf4b-43ff-ae1a-ec07098c6468,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.718878 kubelet[2465]: E0906 00:09:36.718852 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.719012 kubelet[2465]: E0906 00:09:36.718995 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-cdhv4" Sep 6 00:09:36.719101 kubelet[2465]: E0906 00:09:36.719085 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-cdhv4" Sep 6 00:09:36.719230 kubelet[2465]: E0906 00:09:36.719193 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-cdhv4_calico-system(b270af4a-cf4b-43ff-ae1a-ec07098c6468)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-cdhv4_calico-system(b270af4a-cf4b-43ff-ae1a-ec07098c6468)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-cdhv4" podUID="b270af4a-cf4b-43ff-ae1a-ec07098c6468" Sep 6 00:09:36.731075 containerd[1446]: time="2025-09-06T00:09:36.731030988Z" level=error msg="Failed to destroy network for sandbox \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.731425 containerd[1446]: time="2025-09-06T00:09:36.731397079Z" level=error msg="encountered an error cleaning up failed sandbox \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.731472 containerd[1446]: time="2025-09-06T00:09:36.731448921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57978c77c8-2bgst,Uid:310f0b3e-872d-47ec-bab7-107416283ff9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.731703 kubelet[2465]: E0906 00:09:36.731662 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:36.731775 kubelet[2465]: E0906 00:09:36.731726 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57978c77c8-2bgst" Sep 6 00:09:36.731775 kubelet[2465]: E0906 00:09:36.731762 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57978c77c8-2bgst" Sep 6 00:09:36.731886 kubelet[2465]: E0906 00:09:36.731807 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57978c77c8-2bgst_calico-apiserver(310f0b3e-872d-47ec-bab7-107416283ff9)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"calico-apiserver-57978c77c8-2bgst_calico-apiserver(310f0b3e-872d-47ec-bab7-107416283ff9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57978c77c8-2bgst" podUID="310f0b3e-872d-47ec-bab7-107416283ff9" Sep 6 00:09:37.457436 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f-shm.mount: Deactivated successfully. Sep 6 00:09:37.457534 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9-shm.mount: Deactivated successfully. Sep 6 00:09:37.595059 kubelet[2465]: I0906 00:09:37.594949 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:09:37.596121 containerd[1446]: time="2025-09-06T00:09:37.596063675Z" level=info msg="StopPodSandbox for \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\"" Sep 6 00:09:37.597097 containerd[1446]: time="2025-09-06T00:09:37.596925660Z" level=info msg="Ensure that sandbox 5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1 in task-service has been cleanup successfully" Sep 6 00:09:37.598386 kubelet[2465]: I0906 00:09:37.597726 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:09:37.598529 containerd[1446]: time="2025-09-06T00:09:37.598475025Z" level=info msg="StopPodSandbox for \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\"" Sep 6 00:09:37.598656 kubelet[2465]: I0906 00:09:37.598622 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:09:37.599066 containerd[1446]: time="2025-09-06T00:09:37.599034201Z" level=info msg="StopPodSandbox for \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\"" Sep 6 00:09:37.599198 containerd[1446]: time="2025-09-06T00:09:37.599172725Z" level=info msg="Ensure that sandbox 5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2 in task-service has been cleanup successfully" Sep 6 00:09:37.600084 containerd[1446]: time="2025-09-06T00:09:37.600046550Z" level=info msg="Ensure that sandbox 4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f in task-service has been cleanup successfully" Sep 6 00:09:37.601133 kubelet[2465]: I0906 00:09:37.601107 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:09:37.602880 containerd[1446]: time="2025-09-06T00:09:37.602838591Z" level=info msg="StopPodSandbox for \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\"" Sep 6 00:09:37.603211 containerd[1446]: time="2025-09-06T00:09:37.602989956Z" level=info msg="Ensure that sandbox 6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878 in task-service has been cleanup successfully" Sep 6 00:09:37.603753 kubelet[2465]: I0906 00:09:37.603710 2465 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:09:37.604262 containerd[1446]: time="2025-09-06T00:09:37.604239152Z" level=info msg="StopPodSandbox for \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\"" Sep 6 00:09:37.604798 containerd[1446]: time="2025-09-06T00:09:37.604774968Z" level=info msg="Ensure that sandbox 42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1 in task-service has been cleanup successfully" Sep 6 00:09:37.606022 kubelet[2465]: I0906 00:09:37.605990 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:09:37.608679 kubelet[2465]: I0906 00:09:37.608649 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:09:37.609835 containerd[1446]: time="2025-09-06T00:09:37.608166986Z" level=info msg="StopPodSandbox for \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\"" Sep 6 00:09:37.609835 containerd[1446]: time="2025-09-06T00:09:37.609211456Z" level=info msg="StopPodSandbox for \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\"" Sep 6 00:09:37.609835 containerd[1446]: time="2025-09-06T00:09:37.609687670Z" level=info msg="Ensure that sandbox 936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9 in task-service has been cleanup successfully" Sep 6 00:09:37.609835 containerd[1446]: time="2025-09-06T00:09:37.609823154Z" level=info msg="Ensure that sandbox a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2 in task-service has been cleanup successfully" Sep 6 00:09:37.612506 kubelet[2465]: I0906 00:09:37.612464 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:09:37.613073 containerd[1446]: time="2025-09-06T00:09:37.612948525Z" level=info msg="StopPodSandbox for \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\"" Sep 6 00:09:37.613958 containerd[1446]: time="2025-09-06T00:09:37.613931953Z" level=info msg="Ensure that sandbox 2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403 in task-service has been cleanup successfully" Sep 6 00:09:37.658422 containerd[1446]: time="2025-09-06T00:09:37.658369522Z" level=error msg="StopPodSandbox for \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\" failed" error="failed to destroy network for sandbox \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:37.658887 kubelet[2465]: E0906 00:09:37.658806 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:09:37.658887 kubelet[2465]: E0906 00:09:37.658867 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1"} Sep 6 00:09:37.659055 kubelet[2465]: E0906 00:09:37.658920 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8fe42dcf-c7fe-4f83-b5d8-07f73b02320a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:09:37.659055 kubelet[2465]: E0906 00:09:37.658944 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8fe42dcf-c7fe-4f83-b5d8-07f73b02320a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jn4fk" podUID="8fe42dcf-c7fe-4f83-b5d8-07f73b02320a" Sep 6 00:09:37.662908 containerd[1446]: time="2025-09-06T00:09:37.662794651Z" level=error msg="StopPodSandbox for \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\" failed" error="failed to destroy network for sandbox \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:37.663001 kubelet[2465]: E0906 00:09:37.662951 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:09:37.663001 kubelet[2465]: E0906 00:09:37.662987 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878"} Sep 6 00:09:37.663058 kubelet[2465]: E0906 00:09:37.663013 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99be312a-ebdb-437e-9c29-9b53e7c2283e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:09:37.663058 kubelet[2465]: E0906 00:09:37.663031 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99be312a-ebdb-437e-9c29-9b53e7c2283e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b585579bb-pgk4g" podUID="99be312a-ebdb-437e-9c29-9b53e7c2283e" Sep 6 00:09:37.671053 containerd[1446]: time="2025-09-06T00:09:37.670563076Z" level=error msg="StopPodSandbox for \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\" failed" error="failed to destroy network for sandbox \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:37.671138 kubelet[2465]: E0906 00:09:37.670801 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:09:37.671138 kubelet[2465]: E0906 00:09:37.670839 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2"} Sep 6 00:09:37.671138 kubelet[2465]: E0906 00:09:37.670868 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"310f0b3e-872d-47ec-bab7-107416283ff9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:09:37.671138 kubelet[2465]: E0906 00:09:37.670885 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"310f0b3e-872d-47ec-bab7-107416283ff9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57978c77c8-2bgst" podUID="310f0b3e-872d-47ec-bab7-107416283ff9" Sep 6 00:09:37.677025 containerd[1446]: time="2025-09-06T00:09:37.676967582Z" level=error msg="StopPodSandbox for \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\" failed" error="failed to destroy network for sandbox \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:37.677274 kubelet[2465]: E0906 00:09:37.677188 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" podSandboxID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:09:37.677274 kubelet[2465]: E0906 00:09:37.677234 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1"} Sep 6 00:09:37.677274 kubelet[2465]: E0906 00:09:37.677268 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74911617-3e52-41e1-a84e-8da0e31464f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:09:37.677423 kubelet[2465]: E0906 00:09:37.677288 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74911617-3e52-41e1-a84e-8da0e31464f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-msd89" podUID="74911617-3e52-41e1-a84e-8da0e31464f5" Sep 6 00:09:37.684218 containerd[1446]: time="2025-09-06T00:09:37.684113069Z" level=error msg="StopPodSandbox for \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\" failed" error="failed to destroy network for sandbox \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:37.684357 kubelet[2465]: E0906 00:09:37.684319 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:09:37.684522 kubelet[2465]: E0906 00:09:37.684367 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f"} Sep 6 00:09:37.684522 kubelet[2465]: E0906 00:09:37.684403 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28c61605-48bf-420c-9e67-dd95e49a735f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:09:37.684522 kubelet[2465]: E0906 00:09:37.684427 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28c61605-48bf-420c-9e67-dd95e49a735f\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57978c77c8-87cm2" podUID="28c61605-48bf-420c-9e67-dd95e49a735f" Sep 6 00:09:37.684904 containerd[1446]: time="2025-09-06T00:09:37.684864531Z" level=error msg="StopPodSandbox for \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\" failed" error="failed to destroy network for sandbox \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:37.685053 kubelet[2465]: E0906 00:09:37.685013 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:09:37.685053 kubelet[2465]: E0906 00:09:37.685048 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9"} Sep 6 00:09:37.685175 kubelet[2465]: E0906 00:09:37.685068 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20b69192-52d9-4abd-8a15-9ae7c5a2f6fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:09:37.685175 kubelet[2465]: E0906 00:09:37.685085 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20b69192-52d9-4abd-8a15-9ae7c5a2f6fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r2plj" podUID="20b69192-52d9-4abd-8a15-9ae7c5a2f6fb" Sep 6 00:09:37.686995 containerd[1446]: time="2025-09-06T00:09:37.686940671Z" level=error msg="StopPodSandbox for \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\" failed" error="failed to destroy network for sandbox \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:37.687181 kubelet[2465]: E0906 00:09:37.687113 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:09:37.687221 kubelet[2465]: E0906 00:09:37.687192 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403"} Sep 6 00:09:37.687243 kubelet[2465]: E0906 00:09:37.687219 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57617a63-d31f-472d-91f1-03fac276c695\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:09:37.687290 kubelet[2465]: E0906 00:09:37.687238 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57617a63-d31f-472d-91f1-03fac276c695\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cd46bb99-vsnrh" podUID="57617a63-d31f-472d-91f1-03fac276c695" Sep 6 00:09:37.699292 containerd[1446]: time="2025-09-06T00:09:37.699197147Z" level=error msg="StopPodSandbox for \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\" failed" error="failed to destroy network for sandbox \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:09:37.699493 kubelet[2465]: E0906 00:09:37.699448 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:09:37.699530 kubelet[2465]: E0906 00:09:37.699497 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2"} Sep 6 00:09:37.699530 kubelet[2465]: E0906 00:09:37.699526 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b270af4a-cf4b-43ff-ae1a-ec07098c6468\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Sep 6 00:09:37.699608 kubelet[2465]: E0906 00:09:37.699545 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b270af4a-cf4b-43ff-ae1a-ec07098c6468\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-cdhv4" podUID="b270af4a-cf4b-43ff-ae1a-ec07098c6468" Sep 6 00:09:40.288536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362347260.mount: Deactivated successfully. Sep 6 00:09:40.532282 containerd[1446]: time="2025-09-06T00:09:40.532147914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:40.533074 containerd[1446]: time="2025-09-06T00:09:40.532661527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 6 00:09:40.533578 containerd[1446]: time="2025-09-06T00:09:40.533516029Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:40.536153 containerd[1446]: time="2025-09-06T00:09:40.536122177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:40.536791 containerd[1446]: time="2025-09-06T00:09:40.536762553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.936044297s" Sep 6 00:09:40.536862 containerd[1446]: time="2025-09-06T00:09:40.536794034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 6 00:09:40.555802 containerd[1446]: time="2025-09-06T00:09:40.555678084Z" level=info msg="CreateContainer within sandbox \"b729bf9ecd642466cf6987e49fed2d3a45723f0bcff2864167240e19ccbc2c85\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 6 00:09:40.568848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173086225.mount: Deactivated successfully. 
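
Every RunPodSandbox/StopPodSandbox failure above shares one root cause: the Calico CNI plugin stats /var/lib/calico/nodename before doing any pod network setup or teardown, and that file only exists once the calico/node container (whose image is pulled just above and which starts in the records that follow) is running. A minimal Go sketch of that gate — the path and the "check that the calico/node container is running" wording come straight from the log, while the file layout and function names here are illustrative, not Calico's actual source:

// A minimal sketch (not Calico source) of the nodename gate.
package main

import (
	"fmt"
	"os"
	"strings"
)

// Path taken verbatim from the errors in the log.
const nodenameFile = "/var/lib/calico/nodename"

func nodeName() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Until calico/node has started and written the file, every
		// sandbox add/delete fails with the message logged above.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
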
Sep 6 00:09:40.572453 containerd[1446]: time="2025-09-06T00:09:40.572349757Z" level=info msg="CreateContainer within sandbox \"b729bf9ecd642466cf6987e49fed2d3a45723f0bcff2864167240e19ccbc2c85\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"05311cb4c11ef0ce9b054a5c36d5ac1816e356f4d936e4b31c170a71c73a2a12\"" Sep 6 00:09:40.572888 containerd[1446]: time="2025-09-06T00:09:40.572862450Z" level=info msg="StartContainer for \"05311cb4c11ef0ce9b054a5c36d5ac1816e356f4d936e4b31c170a71c73a2a12\"" Sep 6 00:09:40.631887 systemd[1]: Started cri-containerd-05311cb4c11ef0ce9b054a5c36d5ac1816e356f4d936e4b31c170a71c73a2a12.scope - libcontainer container 05311cb4c11ef0ce9b054a5c36d5ac1816e356f4d936e4b31c170a71c73a2a12. Sep 6 00:09:40.656532 containerd[1446]: time="2025-09-06T00:09:40.656490700Z" level=info msg="StartContainer for \"05311cb4c11ef0ce9b054a5c36d5ac1816e356f4d936e4b31c170a71c73a2a12\" returns successfully" Sep 6 00:09:40.770086 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 6 00:09:40.770213 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 6 00:09:40.867399 containerd[1446]: time="2025-09-06T00:09:40.867276850Z" level=info msg="StopPodSandbox for \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\"" Sep 6 00:09:40.870590 kubelet[2465]: I0906 00:09:40.870547 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:09:40.871077 kubelet[2465]: E0906 00:09:40.870876 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:41.061310 containerd[1446]: 2025-09-06 00:09:40.979 [INFO][3787] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:09:41.061310 containerd[1446]: 2025-09-06 00:09:40.980 [INFO][3787] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" iface="eth0" netns="/var/run/netns/cni-c1781c5f-929c-0013-181d-c7cfabbcbbd6" Sep 6 00:09:41.061310 containerd[1446]: 2025-09-06 00:09:40.980 [INFO][3787] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" iface="eth0" netns="/var/run/netns/cni-c1781c5f-929c-0013-181d-c7cfabbcbbd6" Sep 6 00:09:41.061310 containerd[1446]: 2025-09-06 00:09:40.981 [INFO][3787] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" iface="eth0" netns="/var/run/netns/cni-c1781c5f-929c-0013-181d-c7cfabbcbbd6" Sep 6 00:09:41.061310 containerd[1446]: 2025-09-06 00:09:40.981 [INFO][3787] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:09:41.061310 containerd[1446]: 2025-09-06 00:09:40.981 [INFO][3787] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:09:41.061310 containerd[1446]: 2025-09-06 00:09:41.043 [INFO][3798] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" HandleID="k8s-pod-network.6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Workload="localhost-k8s-whisker--5b585579bb--pgk4g-eth0" Sep 6 00:09:41.061310 containerd[1446]: 2025-09-06 00:09:41.043 [INFO][3798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:41.061310 containerd[1446]: 2025-09-06 00:09:41.043 [INFO][3798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:09:41.061310 containerd[1446]: 2025-09-06 00:09:41.054 [WARNING][3798] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" HandleID="k8s-pod-network.6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Workload="localhost-k8s-whisker--5b585579bb--pgk4g-eth0" Sep 6 00:09:41.061310 containerd[1446]: 2025-09-06 00:09:41.054 [INFO][3798] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" HandleID="k8s-pod-network.6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Workload="localhost-k8s-whisker--5b585579bb--pgk4g-eth0" Sep 6 00:09:41.061310 containerd[1446]: 2025-09-06 00:09:41.056 [INFO][3798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:09:41.061310 containerd[1446]: 2025-09-06 00:09:41.059 [INFO][3787] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:09:41.062018 containerd[1446]: time="2025-09-06T00:09:41.061462315Z" level=info msg="TearDown network for sandbox \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\" successfully" Sep 6 00:09:41.062018 containerd[1446]: time="2025-09-06T00:09:41.061499916Z" level=info msg="StopPodSandbox for \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\" returns successfully" Sep 6 00:09:41.125072 kubelet[2465]: I0906 00:09:41.124558 2465 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/99be312a-ebdb-437e-9c29-9b53e7c2283e-whisker-backend-key-pair\") pod \"99be312a-ebdb-437e-9c29-9b53e7c2283e\" (UID: \"99be312a-ebdb-437e-9c29-9b53e7c2283e\") " Sep 6 00:09:41.125072 kubelet[2465]: I0906 00:09:41.124597 2465 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99be312a-ebdb-437e-9c29-9b53e7c2283e-whisker-ca-bundle\") pod \"99be312a-ebdb-437e-9c29-9b53e7c2283e\" (UID: \"99be312a-ebdb-437e-9c29-9b53e7c2283e\") " Sep 6 00:09:41.125072 kubelet[2465]: I0906 00:09:41.124635 2465 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pmvg\" (UniqueName: \"kubernetes.io/projected/99be312a-ebdb-437e-9c29-9b53e7c2283e-kube-api-access-9pmvg\") pod \"99be312a-ebdb-437e-9c29-9b53e7c2283e\" (UID: \"99be312a-ebdb-437e-9c29-9b53e7c2283e\") " Sep 6 00:09:41.138718 kubelet[2465]: I0906 00:09:41.137834 2465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99be312a-ebdb-437e-9c29-9b53e7c2283e-kube-api-access-9pmvg" (OuterVolumeSpecName: "kube-api-access-9pmvg") pod "99be312a-ebdb-437e-9c29-9b53e7c2283e" (UID: "99be312a-ebdb-437e-9c29-9b53e7c2283e"). InnerVolumeSpecName "kube-api-access-9pmvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:09:41.139237 kubelet[2465]: I0906 00:09:41.139205 2465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99be312a-ebdb-437e-9c29-9b53e7c2283e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "99be312a-ebdb-437e-9c29-9b53e7c2283e" (UID: "99be312a-ebdb-437e-9c29-9b53e7c2283e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:09:41.140314 kubelet[2465]: I0906 00:09:41.140285 2465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99be312a-ebdb-437e-9c29-9b53e7c2283e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "99be312a-ebdb-437e-9c29-9b53e7c2283e" (UID: "99be312a-ebdb-437e-9c29-9b53e7c2283e"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:09:41.225545 kubelet[2465]: I0906 00:09:41.225505 2465 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/99be312a-ebdb-437e-9c29-9b53e7c2283e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 6 00:09:41.225687 kubelet[2465]: I0906 00:09:41.225676 2465 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99be312a-ebdb-437e-9c29-9b53e7c2283e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 6 00:09:41.225796 kubelet[2465]: I0906 00:09:41.225784 2465 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9pmvg\" (UniqueName: \"kubernetes.io/projected/99be312a-ebdb-437e-9c29-9b53e7c2283e-kube-api-access-9pmvg\") on node \"localhost\" DevicePath \"\"" Sep 6 00:09:41.289757 systemd[1]: run-netns-cni\x2dc1781c5f\x2d929c\x2d0013\x2d181d\x2dc7cfabbcbbd6.mount: Deactivated successfully. Sep 6 00:09:41.289848 systemd[1]: var-lib-kubelet-pods-99be312a\x2debdb\x2d437e\x2d9c29\x2d9b53e7c2283e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9pmvg.mount: Deactivated successfully. Sep 6 00:09:41.289909 systemd[1]: var-lib-kubelet-pods-99be312a\x2debdb\x2d437e\x2d9c29\x2d9b53e7c2283e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 6 00:09:41.497442 systemd[1]: Removed slice kubepods-besteffort-pod99be312a_ebdb_437e_9c29_9b53e7c2283e.slice - libcontainer container kubepods-besteffort-pod99be312a_ebdb_437e_9c29_9b53e7c2283e.slice. Sep 6 00:09:41.626394 kubelet[2465]: E0906 00:09:41.626366 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:41.651790 kubelet[2465]: I0906 00:09:41.651716 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7hznx" podStartSLOduration=2.184146753 podStartE2EDuration="13.651698224s" podCreationTimestamp="2025-09-06 00:09:28 +0000 UTC" firstStartedPulling="2025-09-06 00:09:29.06990406 +0000 UTC m=+21.654727818" lastFinishedPulling="2025-09-06 00:09:40.537455531 +0000 UTC m=+33.122279289" observedRunningTime="2025-09-06 00:09:41.65032799 +0000 UTC m=+34.235151708" watchObservedRunningTime="2025-09-06 00:09:41.651698224 +0000 UTC m=+34.236521982" Sep 6 00:09:41.735163 systemd[1]: Created slice kubepods-besteffort-pod5c3b75b7_bfc9_4550_a771_608f22cb60da.slice - libcontainer container kubepods-besteffort-pod5c3b75b7_bfc9_4550_a771_608f22cb60da.slice. 
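
The systemd units cleaned up above — run-netns-cni\x2dc1781c5f…, var-lib-kubelet-pods-…-volumes-kubernetes.io\x7eprojected-… — are path-based unit names using systemd's escaping: '/' separators become '-', and any byte outside [A-Za-z0-9_.] (the '-' inside the pod UID, the '~' in volume plugin names) becomes \xNN. A rough Go sketch of that rule, simplified from systemd's full algorithm (which also special-cases leading dots and the root path; the function name is mine):

package main

import (
	"fmt"
	"strings"
)

// escapeUnitPath approximates `systemd-escape --path`.
func escapeUnitPath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i, r := range p {
		switch {
		case r == '/':
			b.WriteByte('-') // path separators become dashes
		case r == '_' || (r == '.' && i > 0) ||
			(r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') || (r >= '0' && r <= '9'):
			b.WriteRune(r) // allowed characters pass through
		default:
			fmt.Fprintf(&b, `\x%02x`, r) // e.g. '-' -> \x2d, '~' -> \x7e
		}
	}
	return b.String()
}

func main() {
	// Reproduces the stem of the netns mount unit in the log
	// (systemd appends the ".mount" suffix itself):
	fmt.Println(escapeUnitPath("/run/netns/cni-c1781c5f-929c-0013-181d-c7cfabbcbbd6"))
}
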
Sep 6 00:09:41.835829 kubelet[2465]: I0906 00:09:41.835698 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5c3b75b7-bfc9-4550-a771-608f22cb60da-whisker-backend-key-pair\") pod \"whisker-5f8bb4c9fc-5zj2c\" (UID: \"5c3b75b7-bfc9-4550-a771-608f22cb60da\") " pod="calico-system/whisker-5f8bb4c9fc-5zj2c" Sep 6 00:09:41.835829 kubelet[2465]: I0906 00:09:41.835766 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c3b75b7-bfc9-4550-a771-608f22cb60da-whisker-ca-bundle\") pod \"whisker-5f8bb4c9fc-5zj2c\" (UID: \"5c3b75b7-bfc9-4550-a771-608f22cb60da\") " pod="calico-system/whisker-5f8bb4c9fc-5zj2c" Sep 6 00:09:41.835829 kubelet[2465]: I0906 00:09:41.835795 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkzjw\" (UniqueName: \"kubernetes.io/projected/5c3b75b7-bfc9-4550-a771-608f22cb60da-kube-api-access-rkzjw\") pod \"whisker-5f8bb4c9fc-5zj2c\" (UID: \"5c3b75b7-bfc9-4550-a771-608f22cb60da\") " pod="calico-system/whisker-5f8bb4c9fc-5zj2c" Sep 6 00:09:42.040554 containerd[1446]: time="2025-09-06T00:09:42.040070962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f8bb4c9fc-5zj2c,Uid:5c3b75b7-bfc9-4550-a771-608f22cb60da,Namespace:calico-system,Attempt:0,}" Sep 6 00:09:42.215605 systemd-networkd[1380]: calic6cc0edbb7d: Link UP Sep 6 00:09:42.216538 systemd-networkd[1380]: calic6cc0edbb7d: Gained carrier Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.091 [INFO][3822] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.107 [INFO][3822] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0 whisker-5f8bb4c9fc- calico-system 5c3b75b7-bfc9-4550-a771-608f22cb60da 908 0 2025-09-06 00:09:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f8bb4c9fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5f8bb4c9fc-5zj2c eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic6cc0edbb7d [] [] }} ContainerID="898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" Namespace="calico-system" Pod="whisker-5f8bb4c9fc-5zj2c" WorkloadEndpoint="localhost-k8s-whisker--5f8bb4c9fc--5zj2c-" Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.107 [INFO][3822] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" Namespace="calico-system" Pod="whisker-5f8bb4c9fc-5zj2c" WorkloadEndpoint="localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0" Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.137 [INFO][3836] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" HandleID="k8s-pod-network.898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" Workload="localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0" Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.137 [INFO][3836] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" 
HandleID="k8s-pod-network.898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" Workload="localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5f8bb4c9fc-5zj2c", "timestamp":"2025-09-06 00:09:42.137279076 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.137 [INFO][3836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.137 [INFO][3836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.137 [INFO][3836] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.154 [INFO][3836] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" host="localhost" Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.166 [INFO][3836] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.178 [INFO][3836] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.181 [INFO][3836] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.184 [INFO][3836] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.184 [INFO][3836] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" host="localhost" Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.186 [INFO][3836] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54 Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.192 [INFO][3836] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" host="localhost" Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.203 [INFO][3836] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" host="localhost" Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.203 [INFO][3836] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" host="localhost" Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.203 [INFO][3836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:09:42.236021 containerd[1446]: 2025-09-06 00:09:42.203 [INFO][3836] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" HandleID="k8s-pod-network.898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" Workload="localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0" Sep 6 00:09:42.236892 containerd[1446]: 2025-09-06 00:09:42.207 [INFO][3822] cni-plugin/k8s.go 418: Populated endpoint ContainerID="898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" Namespace="calico-system" Pod="whisker-5f8bb4c9fc-5zj2c" WorkloadEndpoint="localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0", GenerateName:"whisker-5f8bb4c9fc-", Namespace:"calico-system", SelfLink:"", UID:"5c3b75b7-bfc9-4550-a771-608f22cb60da", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f8bb4c9fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5f8bb4c9fc-5zj2c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic6cc0edbb7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:42.236892 containerd[1446]: 2025-09-06 00:09:42.207 [INFO][3822] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" Namespace="calico-system" Pod="whisker-5f8bb4c9fc-5zj2c" WorkloadEndpoint="localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0" Sep 6 00:09:42.236892 containerd[1446]: 2025-09-06 00:09:42.207 [INFO][3822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6cc0edbb7d ContainerID="898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" Namespace="calico-system" Pod="whisker-5f8bb4c9fc-5zj2c" WorkloadEndpoint="localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0" Sep 6 00:09:42.236892 containerd[1446]: 2025-09-06 00:09:42.215 [INFO][3822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" Namespace="calico-system" Pod="whisker-5f8bb4c9fc-5zj2c" WorkloadEndpoint="localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0" Sep 6 00:09:42.236892 containerd[1446]: 2025-09-06 00:09:42.215 [INFO][3822] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" Namespace="calico-system" Pod="whisker-5f8bb4c9fc-5zj2c" WorkloadEndpoint="localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0", GenerateName:"whisker-5f8bb4c9fc-", Namespace:"calico-system", SelfLink:"", UID:"5c3b75b7-bfc9-4550-a771-608f22cb60da", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f8bb4c9fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54", Pod:"whisker-5f8bb4c9fc-5zj2c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic6cc0edbb7d", MAC:"ba:93:4b:2f:ad:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:42.236892 containerd[1446]: 2025-09-06 00:09:42.231 [INFO][3822] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54" Namespace="calico-system" Pod="whisker-5f8bb4c9fc-5zj2c" WorkloadEndpoint="localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0" Sep 6 00:09:42.263482 containerd[1446]: time="2025-09-06T00:09:42.262879918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:42.263482 containerd[1446]: time="2025-09-06T00:09:42.262948200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:42.263482 containerd[1446]: time="2025-09-06T00:09:42.262963520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:42.263482 containerd[1446]: time="2025-09-06T00:09:42.263049642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:42.292123 systemd[1]: Started cri-containerd-898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54.scope - libcontainer container 898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54. 
Sep 6 00:09:42.324942 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:09:42.352519 containerd[1446]: time="2025-09-06T00:09:42.352471328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f8bb4c9fc-5zj2c,Uid:5c3b75b7-bfc9-4550-a771-608f22cb60da,Namespace:calico-system,Attempt:0,} returns sandbox id \"898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54\"" Sep 6 00:09:42.355519 containerd[1446]: time="2025-09-06T00:09:42.354770823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 6 00:09:42.401780 kernel: bpftool[4018]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 6 00:09:42.565887 systemd-networkd[1380]: vxlan.calico: Link UP Sep 6 00:09:42.565896 systemd-networkd[1380]: vxlan.calico: Gained carrier Sep 6 00:09:42.629453 kubelet[2465]: I0906 00:09:42.629418 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:09:43.416324 containerd[1446]: time="2025-09-06T00:09:43.416269243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:43.417048 containerd[1446]: time="2025-09-06T00:09:43.417022940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 6 00:09:43.418027 containerd[1446]: time="2025-09-06T00:09:43.417996323Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:43.420350 containerd[1446]: time="2025-09-06T00:09:43.420313297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:43.421908 containerd[1446]: time="2025-09-06T00:09:43.421845093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.067037669s" Sep 6 00:09:43.421908 containerd[1446]: time="2025-09-06T00:09:43.421880974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 6 00:09:43.431073 containerd[1446]: time="2025-09-06T00:09:43.431029029Z" level=info msg="CreateContainer within sandbox \"898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 6 00:09:43.470962 containerd[1446]: time="2025-09-06T00:09:43.470885402Z" level=info msg="CreateContainer within sandbox \"898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a4a61d8f880228a2565331e596647324da747df2872adf6bef5a9e7c8d3c2b95\"" Sep 6 00:09:43.471558 containerd[1446]: time="2025-09-06T00:09:43.471449336Z" level=info msg="StartContainer for \"a4a61d8f880228a2565331e596647324da747df2872adf6bef5a9e7c8d3c2b95\"" Sep 6 00:09:43.493904 kubelet[2465]: I0906 00:09:43.492951 2465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="99be312a-ebdb-437e-9c29-9b53e7c2283e" path="/var/lib/kubelet/pods/99be312a-ebdb-437e-9c29-9b53e7c2283e/volumes" Sep 6 00:09:43.517911 systemd[1]: Started cri-containerd-a4a61d8f880228a2565331e596647324da747df2872adf6bef5a9e7c8d3c2b95.scope - libcontainer container a4a61d8f880228a2565331e596647324da747df2872adf6bef5a9e7c8d3c2b95. Sep 6 00:09:43.549157 containerd[1446]: time="2025-09-06T00:09:43.549093635Z" level=info msg="StartContainer for \"a4a61d8f880228a2565331e596647324da747df2872adf6bef5a9e7c8d3c2b95\" returns successfully" Sep 6 00:09:43.550469 containerd[1446]: time="2025-09-06T00:09:43.550442266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 6 00:09:44.172662 systemd-networkd[1380]: calic6cc0edbb7d: Gained IPv6LL Sep 6 00:09:44.492014 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL Sep 6 00:09:45.032218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814533221.mount: Deactivated successfully. Sep 6 00:09:45.064765 containerd[1446]: time="2025-09-06T00:09:45.064660197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:45.065602 containerd[1446]: time="2025-09-06T00:09:45.065246690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 6 00:09:45.066165 containerd[1446]: time="2025-09-06T00:09:45.066107949Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:45.075894 containerd[1446]: time="2025-09-06T00:09:45.075826923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:45.076693 containerd[1446]: time="2025-09-06T00:09:45.076552859Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.526074431s" Sep 6 00:09:45.076693 containerd[1446]: time="2025-09-06T00:09:45.076596140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 6 00:09:45.089405 containerd[1446]: time="2025-09-06T00:09:45.089360460Z" level=info msg="CreateContainer within sandbox \"898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 6 00:09:45.101273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount220030082.mount: Deactivated successfully. 
Sep 6 00:09:45.106078 containerd[1446]: time="2025-09-06T00:09:45.106025747Z" level=info msg="CreateContainer within sandbox \"898290028c09c4e04684a33477bb9c4a85a928f1ae58d2ac422262fe22a7ad54\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"35ed4991e0df3dcf7c49e55a891afed227850aa7b36aad189b7b44f5da124111\"" Sep 6 00:09:45.106594 containerd[1446]: time="2025-09-06T00:09:45.106567879Z" level=info msg="StartContainer for \"35ed4991e0df3dcf7c49e55a891afed227850aa7b36aad189b7b44f5da124111\"" Sep 6 00:09:45.146981 systemd[1]: Started cri-containerd-35ed4991e0df3dcf7c49e55a891afed227850aa7b36aad189b7b44f5da124111.scope - libcontainer container 35ed4991e0df3dcf7c49e55a891afed227850aa7b36aad189b7b44f5da124111. Sep 6 00:09:45.233813 containerd[1446]: time="2025-09-06T00:09:45.233765038Z" level=info msg="StartContainer for \"35ed4991e0df3dcf7c49e55a891afed227850aa7b36aad189b7b44f5da124111\" returns successfully" Sep 6 00:09:45.655843 kubelet[2465]: I0906 00:09:45.655784 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5f8bb4c9fc-5zj2c" podStartSLOduration=1.932292088 podStartE2EDuration="4.655768042s" podCreationTimestamp="2025-09-06 00:09:41 +0000 UTC" firstStartedPulling="2025-09-06 00:09:42.354091927 +0000 UTC m=+34.938915645" lastFinishedPulling="2025-09-06 00:09:45.077567841 +0000 UTC m=+37.662391599" observedRunningTime="2025-09-06 00:09:45.653499352 +0000 UTC m=+38.238323111" watchObservedRunningTime="2025-09-06 00:09:45.655768042 +0000 UTC m=+38.240591800" Sep 6 00:09:48.490966 containerd[1446]: time="2025-09-06T00:09:48.490916785Z" level=info msg="StopPodSandbox for \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\"" Sep 6 00:09:48.572893 containerd[1446]: 2025-09-06 00:09:48.534 [INFO][4212] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:09:48.572893 containerd[1446]: 2025-09-06 00:09:48.534 [INFO][4212] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" iface="eth0" netns="/var/run/netns/cni-d56bc1b7-68a0-91d4-92ff-e58425709667" Sep 6 00:09:48.572893 containerd[1446]: 2025-09-06 00:09:48.534 [INFO][4212] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" iface="eth0" netns="/var/run/netns/cni-d56bc1b7-68a0-91d4-92ff-e58425709667" Sep 6 00:09:48.572893 containerd[1446]: 2025-09-06 00:09:48.535 [INFO][4212] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" iface="eth0" netns="/var/run/netns/cni-d56bc1b7-68a0-91d4-92ff-e58425709667" Sep 6 00:09:48.572893 containerd[1446]: 2025-09-06 00:09:48.535 [INFO][4212] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:09:48.572893 containerd[1446]: 2025-09-06 00:09:48.535 [INFO][4212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:09:48.572893 containerd[1446]: 2025-09-06 00:09:48.556 [INFO][4221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" HandleID="k8s-pod-network.2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Workload="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:09:48.572893 containerd[1446]: 2025-09-06 00:09:48.556 [INFO][4221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:48.572893 containerd[1446]: 2025-09-06 00:09:48.556 [INFO][4221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:09:48.572893 containerd[1446]: 2025-09-06 00:09:48.566 [WARNING][4221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" HandleID="k8s-pod-network.2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Workload="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:09:48.572893 containerd[1446]: 2025-09-06 00:09:48.566 [INFO][4221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" HandleID="k8s-pod-network.2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Workload="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:09:48.572893 containerd[1446]: 2025-09-06 00:09:48.569 [INFO][4221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:09:48.572893 containerd[1446]: 2025-09-06 00:09:48.570 [INFO][4212] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:09:48.573273 containerd[1446]: time="2025-09-06T00:09:48.573025362Z" level=info msg="TearDown network for sandbox \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\" successfully" Sep 6 00:09:48.573273 containerd[1446]: time="2025-09-06T00:09:48.573053322Z" level=info msg="StopPodSandbox for \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\" returns successfully" Sep 6 00:09:48.574072 containerd[1446]: time="2025-09-06T00:09:48.573977621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cd46bb99-vsnrh,Uid:57617a63-d31f-472d-91f1-03fac276c695,Namespace:calico-system,Attempt:1,}" Sep 6 00:09:48.575201 systemd[1]: run-netns-cni\x2dd56bc1b7\x2d68a0\x2d91d4\x2d92ff\x2de58425709667.mount: Deactivated successfully. 
Sep 6 00:09:48.674662 systemd-networkd[1380]: calid58199084d0: Link UP Sep 6 00:09:48.675303 systemd-networkd[1380]: calid58199084d0: Gained carrier Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.616 [INFO][4231] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0 calico-kube-controllers-5cd46bb99- calico-system 57617a63-d31f-472d-91f1-03fac276c695 941 0 2025-09-06 00:09:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5cd46bb99 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5cd46bb99-vsnrh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid58199084d0 [] [] }} ContainerID="6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" Namespace="calico-system" Pod="calico-kube-controllers-5cd46bb99-vsnrh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-" Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.616 [INFO][4231] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" Namespace="calico-system" Pod="calico-kube-controllers-5cd46bb99-vsnrh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.637 [INFO][4244] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" HandleID="k8s-pod-network.6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" Workload="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.637 [INFO][4244] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" HandleID="k8s-pod-network.6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" Workload="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5cd46bb99-vsnrh", "timestamp":"2025-09-06 00:09:48.637456422 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.637 [INFO][4244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.637 [INFO][4244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.637 [INFO][4244] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.647 [INFO][4244] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" host="localhost" Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.652 [INFO][4244] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.656 [INFO][4244] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.658 [INFO][4244] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.660 [INFO][4244] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.660 [INFO][4244] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" host="localhost" Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.662 [INFO][4244] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.665 [INFO][4244] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" host="localhost" Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.670 [INFO][4244] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" host="localhost" Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.670 [INFO][4244] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" host="localhost" Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.670 [INFO][4244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
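The WorkloadEndpoint names in these entries (localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0, localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0) follow a <node>-k8s-<pod>-<iface> scheme in which dashes inside each field are doubled, so the single-dash field separators stay unambiguous. A small sketch of that escaping — reconstructed from the names visible in this log, not Calico's exact code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // wepName builds a Calico-style WorkloadEndpoint name: single dashes
    // separate fields, so dashes inside a field are escaped by doubling.
    func wepName(node, orchestrator, pod, iface string) string {
    	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
    	return strings.Join([]string{esc(node), orchestrator, esc(pod), esc(iface)}, "-")
    }

    func main() {
    	fmt.Println(wepName("localhost", "k8s", "whisker-5f8bb4c9fc-5zj2c", "eth0"))
    	// localhost-k8s-whisker--5f8bb4c9fc--5zj2c-eth0, as in the log above
    	fmt.Println(wepName("localhost", "k8s", "calico-kube-controllers-5cd46bb99-vsnrh", "eth0"))
    	// localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0
    }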
Sep 6 00:09:48.690557 containerd[1446]: 2025-09-06 00:09:48.670 [INFO][4244] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" HandleID="k8s-pod-network.6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" Workload="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:09:48.691138 containerd[1446]: 2025-09-06 00:09:48.672 [INFO][4231] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" Namespace="calico-system" Pod="calico-kube-controllers-5cd46bb99-vsnrh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0", GenerateName:"calico-kube-controllers-5cd46bb99-", Namespace:"calico-system", SelfLink:"", UID:"57617a63-d31f-472d-91f1-03fac276c695", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cd46bb99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5cd46bb99-vsnrh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid58199084d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:48.691138 containerd[1446]: 2025-09-06 00:09:48.672 [INFO][4231] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" Namespace="calico-system" Pod="calico-kube-controllers-5cd46bb99-vsnrh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:09:48.691138 containerd[1446]: 2025-09-06 00:09:48.672 [INFO][4231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid58199084d0 ContainerID="6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" Namespace="calico-system" Pod="calico-kube-controllers-5cd46bb99-vsnrh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:09:48.691138 containerd[1446]: 2025-09-06 00:09:48.674 [INFO][4231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" Namespace="calico-system" Pod="calico-kube-controllers-5cd46bb99-vsnrh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:09:48.691138 containerd[1446]: 2025-09-06 00:09:48.675 [INFO][4231] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" Namespace="calico-system" Pod="calico-kube-controllers-5cd46bb99-vsnrh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0", GenerateName:"calico-kube-controllers-5cd46bb99-", Namespace:"calico-system", SelfLink:"", UID:"57617a63-d31f-472d-91f1-03fac276c695", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cd46bb99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d", Pod:"calico-kube-controllers-5cd46bb99-vsnrh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid58199084d0", MAC:"be:15:b3:6e:67:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:48.691138 containerd[1446]: 2025-09-06 00:09:48.688 [INFO][4231] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d" Namespace="calico-system" Pod="calico-kube-controllers-5cd46bb99-vsnrh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:09:48.706472 containerd[1446]: time="2025-09-06T00:09:48.706318611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:48.706600 containerd[1446]: time="2025-09-06T00:09:48.706496615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:48.706600 containerd[1446]: time="2025-09-06T00:09:48.706546696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:48.707149 containerd[1446]: time="2025-09-06T00:09:48.707104307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:48.728890 systemd[1]: Started cri-containerd-6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d.scope - libcontainer container 6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d. 
Sep 6 00:09:48.739307 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:09:48.755670 containerd[1446]: time="2025-09-06T00:09:48.755356440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cd46bb99-vsnrh,Uid:57617a63-d31f-472d-91f1-03fac276c695,Namespace:calico-system,Attempt:1,} returns sandbox id \"6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d\"" Sep 6 00:09:48.757922 containerd[1446]: time="2025-09-06T00:09:48.757888892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 6 00:09:49.491593 containerd[1446]: time="2025-09-06T00:09:49.491548713Z" level=info msg="StopPodSandbox for \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\"" Sep 6 00:09:49.492503 containerd[1446]: time="2025-09-06T00:09:49.491576433Z" level=info msg="StopPodSandbox for \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\"" Sep 6 00:09:49.494403 containerd[1446]: time="2025-09-06T00:09:49.491634875Z" level=info msg="StopPodSandbox for \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\"" Sep 6 00:09:49.612949 containerd[1446]: 2025-09-06 00:09:49.557 [INFO][4337] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:09:49.612949 containerd[1446]: 2025-09-06 00:09:49.557 [INFO][4337] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" iface="eth0" netns="/var/run/netns/cni-b62769c9-0ac8-2f56-cb34-079d75f1336b" Sep 6 00:09:49.612949 containerd[1446]: 2025-09-06 00:09:49.557 [INFO][4337] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" iface="eth0" netns="/var/run/netns/cni-b62769c9-0ac8-2f56-cb34-079d75f1336b" Sep 6 00:09:49.612949 containerd[1446]: 2025-09-06 00:09:49.559 [INFO][4337] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" iface="eth0" netns="/var/run/netns/cni-b62769c9-0ac8-2f56-cb34-079d75f1336b" Sep 6 00:09:49.612949 containerd[1446]: 2025-09-06 00:09:49.559 [INFO][4337] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:09:49.612949 containerd[1446]: 2025-09-06 00:09:49.559 [INFO][4337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:09:49.612949 containerd[1446]: 2025-09-06 00:09:49.591 [INFO][4360] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" HandleID="k8s-pod-network.42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Workload="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:09:49.612949 containerd[1446]: 2025-09-06 00:09:49.591 [INFO][4360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:49.612949 containerd[1446]: 2025-09-06 00:09:49.591 [INFO][4360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:09:49.612949 containerd[1446]: 2025-09-06 00:09:49.603 [WARNING][4360] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" HandleID="k8s-pod-network.42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Workload="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:09:49.612949 containerd[1446]: 2025-09-06 00:09:49.603 [INFO][4360] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" HandleID="k8s-pod-network.42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Workload="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:09:49.612949 containerd[1446]: 2025-09-06 00:09:49.604 [INFO][4360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:09:49.612949 containerd[1446]: 2025-09-06 00:09:49.609 [INFO][4337] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:09:49.614161 containerd[1446]: time="2025-09-06T00:09:49.614120080Z" level=info msg="TearDown network for sandbox \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\" successfully" Sep 6 00:09:49.614161 containerd[1446]: time="2025-09-06T00:09:49.614156921Z" level=info msg="StopPodSandbox for \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\" returns successfully" Sep 6 00:09:49.614631 systemd[1]: run-netns-cni\x2db62769c9\x2d0ac8\x2d2f56\x2dcb34\x2d079d75f1336b.mount: Deactivated successfully. Sep 6 00:09:49.614987 kubelet[2465]: E0906 00:09:49.614897 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:49.615554 containerd[1446]: time="2025-09-06T00:09:49.615495867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-msd89,Uid:74911617-3e52-41e1-a84e-8da0e31464f5,Namespace:kube-system,Attempt:1,}" Sep 6 00:09:49.625838 containerd[1446]: 2025-09-06 00:09:49.570 [INFO][4347] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:09:49.625838 containerd[1446]: 2025-09-06 00:09:49.570 [INFO][4347] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" iface="eth0" netns="/var/run/netns/cni-c494f523-6e25-57f7-2f7a-8b3579af661f" Sep 6 00:09:49.625838 containerd[1446]: 2025-09-06 00:09:49.570 [INFO][4347] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" iface="eth0" netns="/var/run/netns/cni-c494f523-6e25-57f7-2f7a-8b3579af661f" Sep 6 00:09:49.625838 containerd[1446]: 2025-09-06 00:09:49.571 [INFO][4347] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" iface="eth0" netns="/var/run/netns/cni-c494f523-6e25-57f7-2f7a-8b3579af661f" Sep 6 00:09:49.625838 containerd[1446]: 2025-09-06 00:09:49.571 [INFO][4347] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:09:49.625838 containerd[1446]: 2025-09-06 00:09:49.571 [INFO][4347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:09:49.625838 containerd[1446]: 2025-09-06 00:09:49.598 [INFO][4367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" HandleID="k8s-pod-network.936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Workload="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:09:49.625838 containerd[1446]: 2025-09-06 00:09:49.599 [INFO][4367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:49.625838 containerd[1446]: 2025-09-06 00:09:49.604 [INFO][4367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:09:49.625838 containerd[1446]: 2025-09-06 00:09:49.617 [WARNING][4367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" HandleID="k8s-pod-network.936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Workload="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:09:49.625838 containerd[1446]: 2025-09-06 00:09:49.617 [INFO][4367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" HandleID="k8s-pod-network.936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Workload="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:09:49.625838 containerd[1446]: 2025-09-06 00:09:49.619 [INFO][4367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:09:49.625838 containerd[1446]: 2025-09-06 00:09:49.621 [INFO][4347] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:09:49.628639 systemd[1]: run-netns-cni\x2dc494f523\x2d6e25\x2d57f7\x2d2f7a\x2d8b3579af661f.mount: Deactivated successfully. Sep 6 00:09:49.629664 containerd[1446]: time="2025-09-06T00:09:49.629385780Z" level=info msg="TearDown network for sandbox \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\" successfully" Sep 6 00:09:49.629664 containerd[1446]: time="2025-09-06T00:09:49.629413621Z" level=info msg="StopPodSandbox for \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\" returns successfully" Sep 6 00:09:49.631114 containerd[1446]: time="2025-09-06T00:09:49.630214237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2plj,Uid:20b69192-52d9-4abd-8a15-9ae7c5a2f6fb,Namespace:calico-system,Attempt:1,}" Sep 6 00:09:49.646681 containerd[1446]: 2025-09-06 00:09:49.579 [INFO][4326] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:09:49.646681 containerd[1446]: 2025-09-06 00:09:49.579 [INFO][4326] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" iface="eth0" netns="/var/run/netns/cni-ff155d8a-af08-17cc-23f9-94b8b070727f" Sep 6 00:09:49.646681 containerd[1446]: 2025-09-06 00:09:49.579 [INFO][4326] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" iface="eth0" netns="/var/run/netns/cni-ff155d8a-af08-17cc-23f9-94b8b070727f" Sep 6 00:09:49.646681 containerd[1446]: 2025-09-06 00:09:49.580 [INFO][4326] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" iface="eth0" netns="/var/run/netns/cni-ff155d8a-af08-17cc-23f9-94b8b070727f" Sep 6 00:09:49.646681 containerd[1446]: 2025-09-06 00:09:49.580 [INFO][4326] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:09:49.646681 containerd[1446]: 2025-09-06 00:09:49.580 [INFO][4326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:09:49.646681 containerd[1446]: 2025-09-06 00:09:49.622 [INFO][4377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" HandleID="k8s-pod-network.4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Workload="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:09:49.646681 containerd[1446]: 2025-09-06 00:09:49.622 [INFO][4377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:49.646681 containerd[1446]: 2025-09-06 00:09:49.622 [INFO][4377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:09:49.646681 containerd[1446]: 2025-09-06 00:09:49.634 [WARNING][4377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" HandleID="k8s-pod-network.4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Workload="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:09:49.646681 containerd[1446]: 2025-09-06 00:09:49.634 [INFO][4377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" HandleID="k8s-pod-network.4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Workload="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:09:49.646681 containerd[1446]: 2025-09-06 00:09:49.636 [INFO][4377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:09:49.646681 containerd[1446]: 2025-09-06 00:09:49.640 [INFO][4326] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:09:49.648182 systemd[1]: run-netns-cni\x2dff155d8a\x2daf08\x2d17cc\x2d23f9\x2d94b8b070727f.mount: Deactivated successfully. 
Sep 6 00:09:49.652901 containerd[1446]: time="2025-09-06T00:09:49.649345652Z" level=info msg="TearDown network for sandbox \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\" successfully" Sep 6 00:09:49.652901 containerd[1446]: time="2025-09-06T00:09:49.649373773Z" level=info msg="StopPodSandbox for \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\" returns successfully" Sep 6 00:09:49.655105 containerd[1446]: time="2025-09-06T00:09:49.655066525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57978c77c8-87cm2,Uid:28c61605-48bf-420c-9e67-dd95e49a735f,Namespace:calico-apiserver,Attempt:1,}" Sep 6 00:09:49.783552 systemd-networkd[1380]: calide4cf757f26: Link UP Sep 6 00:09:49.786723 systemd-networkd[1380]: calide4cf757f26: Gained carrier Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.688 [INFO][4386] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--msd89-eth0 coredns-674b8bbfcf- kube-system 74911617-3e52-41e1-a84e-8da0e31464f5 952 0 2025-09-06 00:09:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-msd89 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calide4cf757f26 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" Namespace="kube-system" Pod="coredns-674b8bbfcf-msd89" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msd89-" Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.688 [INFO][4386] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" Namespace="kube-system" Pod="coredns-674b8bbfcf-msd89" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.717 [INFO][4429] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" HandleID="k8s-pod-network.d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" Workload="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.717 [INFO][4429] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" HandleID="k8s-pod-network.d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" Workload="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d000), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-msd89", "timestamp":"2025-09-06 00:09:49.717680555 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.717 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.718 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.718 [INFO][4429] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.727 [INFO][4429] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" host="localhost" Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.740 [INFO][4429] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.751 [INFO][4429] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.753 [INFO][4429] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.757 [INFO][4429] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.757 [INFO][4429] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" host="localhost" Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.758 [INFO][4429] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2 Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.764 [INFO][4429] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" host="localhost" Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.769 [INFO][4429] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" host="localhost" Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.769 [INFO][4429] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" host="localhost" Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.769 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
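The kubelet "Nameserver limits exceeded" entry further up (at 00:09:49.614) is worth decoding: resolv.conf carried more nameservers than the classic resolver limit of three (glibc's MAXNS), so kubelet dropped the extras and applied "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of that truncation — the file path is the conventional one, and the helper is hypothetical:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // glibc MAXNS; kubelet warns past this

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		fmt.Printf("Nameserver limits exceeded, applying: %s\n",
    			strings.Join(servers[:maxNameservers], " "))
    	} else {
    		fmt.Println(strings.Join(servers, " "))
    	}
    }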
Sep 6 00:09:49.802261 containerd[1446]: 2025-09-06 00:09:49.769 [INFO][4429] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" HandleID="k8s-pod-network.d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" Workload="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:09:49.804067 containerd[1446]: 2025-09-06 00:09:49.778 [INFO][4386] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" Namespace="kube-system" Pod="coredns-674b8bbfcf-msd89" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--msd89-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"74911617-3e52-41e1-a84e-8da0e31464f5", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-msd89", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide4cf757f26", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:49.804067 containerd[1446]: 2025-09-06 00:09:49.778 [INFO][4386] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" Namespace="kube-system" Pod="coredns-674b8bbfcf-msd89" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:09:49.804067 containerd[1446]: 2025-09-06 00:09:49.778 [INFO][4386] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide4cf757f26 ContainerID="d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" Namespace="kube-system" Pod="coredns-674b8bbfcf-msd89" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:09:49.804067 containerd[1446]: 2025-09-06 00:09:49.785 [INFO][4386] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" Namespace="kube-system" Pod="coredns-674b8bbfcf-msd89" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:09:49.804067 
containerd[1446]: 2025-09-06 00:09:49.787 [INFO][4386] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" Namespace="kube-system" Pod="coredns-674b8bbfcf-msd89" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--msd89-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"74911617-3e52-41e1-a84e-8da0e31464f5", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2", Pod:"coredns-674b8bbfcf-msd89", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide4cf757f26", MAC:"26:4e:93:fe:b5:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:49.804067 containerd[1446]: 2025-09-06 00:09:49.799 [INFO][4386] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2" Namespace="kube-system" Pod="coredns-674b8bbfcf-msd89" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:09:49.823178 containerd[1446]: time="2025-09-06T00:09:49.823074705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:49.823763 containerd[1446]: time="2025-09-06T00:09:49.823514633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:49.823763 containerd[1446]: time="2025-09-06T00:09:49.823542714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:49.823763 containerd[1446]: time="2025-09-06T00:09:49.823637876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:49.846245 systemd[1]: Started cri-containerd-d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2.scope - libcontainer container d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2. Sep 6 00:09:49.861014 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:09:49.879496 systemd-networkd[1380]: cali0067f3b4759: Link UP Sep 6 00:09:49.882933 systemd-networkd[1380]: cali0067f3b4759: Gained carrier Sep 6 00:09:49.930288 containerd[1446]: time="2025-09-06T00:09:49.930239490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-msd89,Uid:74911617-3e52-41e1-a84e-8da0e31464f5,Namespace:kube-system,Attempt:1,} returns sandbox id \"d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2\"" Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.730 [INFO][4410] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0 calico-apiserver-57978c77c8- calico-apiserver 28c61605-48bf-420c-9e67-dd95e49a735f 954 0 2025-09-06 00:09:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57978c77c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57978c77c8-87cm2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0067f3b4759 [] [] }} ContainerID="b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-87cm2" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--87cm2-" Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.730 [INFO][4410] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-87cm2" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.768 [INFO][4443] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" HandleID="k8s-pod-network.b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" Workload="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.768 [INFO][4443] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" HandleID="k8s-pod-network.b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" Workload="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b9640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57978c77c8-87cm2", "timestamp":"2025-09-06 00:09:49.768179747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.768 [INFO][4443] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.769 [INFO][4443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.769 [INFO][4443] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.829 [INFO][4443] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" host="localhost" Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.840 [INFO][4443] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.851 [INFO][4443] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.853 [INFO][4443] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.855 [INFO][4443] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.855 [INFO][4443] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" host="localhost" Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.856 [INFO][4443] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6 Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.861 [INFO][4443] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" host="localhost" Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.870 [INFO][4443] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" host="localhost" Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.870 [INFO][4443] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" host="localhost" Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.870 [INFO][4443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
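The ipam/ipam.go entries above trace Calico's per-pod address assignment: take the host-wide IPAM lock, look up the host's block affinities, load the affine 192.168.88.128/26 block, claim the next free address, write the block back to the datastore, and release the lock. A minimal Go sketch of that sequence follows; the types and helpers are hypothetical stand-ins, not Calico's actual libcalico-go code.

package main

import (
	"fmt"
	"net"
	"sync"
)

// hostIPAMLock stands in for the "host-wide IPAM lock" in the log.
var hostIPAMLock sync.Mutex

// block is a hypothetical affine IPAM block: a CIDR plus the set of
// addresses already claimed within it.
type block struct {
	cidr net.IPNet
	used map[string]bool
}

// assignOne mirrors the logged flow: lock, walk the host-affine block,
// claim the first free address ("Writing block in order to claim IPs"),
// then release the lock.
func assignOne(b *block, handle string) (net.IP, error) {
	hostIPAMLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if !b.used[ip.String()] {
			b.used[ip.String()] = true // persist the claim under this handle
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted for handle %s", b.cidr.String(), handle)
}

// next returns ip+1, carrying across octets.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		if out[i]++; out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26") // the block in the log
	b := &block{cidr: *cidr, used: map[string]bool{
		"192.168.88.128": true, // network address
		"192.168.88.129": true, "192.168.88.130": true, "192.168.88.131": true,
	}}
	ip, err := assignOne(b, "k8s-pod-network.example") // hypothetical handle
	fmt.Println(ip, err)                               // 192.168.88.132 <nil>, matching the claim above
}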
Sep 6 00:09:49.930591 containerd[1446]: 2025-09-06 00:09:49.870 [INFO][4443] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" HandleID="k8s-pod-network.b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" Workload="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:09:49.931529 containerd[1446]: 2025-09-06 00:09:49.875 [INFO][4410] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-87cm2" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0", GenerateName:"calico-apiserver-57978c77c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"28c61605-48bf-420c-9e67-dd95e49a735f", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57978c77c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57978c77c8-87cm2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0067f3b4759", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:49.931529 containerd[1446]: 2025-09-06 00:09:49.876 [INFO][4410] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-87cm2" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:09:49.931529 containerd[1446]: 2025-09-06 00:09:49.876 [INFO][4410] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0067f3b4759 ContainerID="b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-87cm2" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:09:49.931529 containerd[1446]: 2025-09-06 00:09:49.882 [INFO][4410] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-87cm2" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:09:49.931529 containerd[1446]: 2025-09-06 00:09:49.882 [INFO][4410] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-87cm2" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0", GenerateName:"calico-apiserver-57978c77c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"28c61605-48bf-420c-9e67-dd95e49a735f", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57978c77c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6", Pod:"calico-apiserver-57978c77c8-87cm2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0067f3b4759", MAC:"a6:69:80:32:f9:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:49.931529 containerd[1446]: 2025-09-06 00:09:49.922 [INFO][4410] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-87cm2" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:09:49.933511 kubelet[2465]: E0906 00:09:49.933480 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:49.942183 containerd[1446]: time="2025-09-06T00:09:49.942149124Z" level=info msg="CreateContainer within sandbox \"d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:09:49.956309 containerd[1446]: time="2025-09-06T00:09:49.955824712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:49.956309 containerd[1446]: time="2025-09-06T00:09:49.955931434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:49.956309 containerd[1446]: time="2025-09-06T00:09:49.955947355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:49.957378 containerd[1446]: time="2025-09-06T00:09:49.957275701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:49.957590 containerd[1446]: time="2025-09-06T00:09:49.957557906Z" level=info msg="CreateContainer within sandbox \"d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"730d4cecb69b591449343f980b78470185c63403f639fd2996c5827b7fbef95a\"" Sep 6 00:09:49.959012 containerd[1446]: time="2025-09-06T00:09:49.958947094Z" level=info msg="StartContainer for \"730d4cecb69b591449343f980b78470185c63403f639fd2996c5827b7fbef95a\"" Sep 6 00:09:49.976428 systemd-networkd[1380]: cali8a96f891182: Link UP Sep 6 00:09:49.977225 systemd-networkd[1380]: cali8a96f891182: Gained carrier Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.726 [INFO][4396] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--r2plj-eth0 csi-node-driver- calico-system 20b69192-52d9-4abd-8a15-9ae7c5a2f6fb 953 0 2025-09-06 00:09:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-r2plj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8a96f891182 [] [] }} ContainerID="8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" Namespace="calico-system" Pod="csi-node-driver-r2plj" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2plj-" Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.726 [INFO][4396] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" Namespace="calico-system" Pod="csi-node-driver-r2plj" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.790 [INFO][4444] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" HandleID="k8s-pod-network.8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" Workload="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.790 [INFO][4444] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" HandleID="k8s-pod-network.8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" Workload="localhost-k8s-csi--node--driver--r2plj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3d20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-r2plj", "timestamp":"2025-09-06 00:09:49.790166778 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.790 [INFO][4444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.870 [INFO][4444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
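Each endpoint=&v3.WorkloadEndpoint{...} blob above is a Go-syntax (%#v-style) dump of Calico's v3 WorkloadEndpoint object. For orientation when scanning those dumps, here is a trimmed, hypothetical mirror of just the fields worth reading; the real type in Calico's libcalico-go carries many more.

package main

import "fmt"

// WorkloadEndpoint (trimmed, illustrative only). Values in main() are
// taken from the coredns-674b8bbfcf-msd89 dump above.
type WorkloadEndpoint struct {
	Namespace     string   // e.g. "kube-system"
	Pod           string   // e.g. "coredns-674b8bbfcf-msd89"
	ContainerID   string   // sandbox ID; empty in "Populated endpoint", set by "Added Mac, ..."
	InterfaceName string   // host-side veth, e.g. "calide4cf757f26"
	MAC           string   // also filled in by the "Added Mac, ..." step
	IPNetworks    []string // pod IP as a /32, e.g. "192.168.88.131/32"
	Profiles      []string // policy profiles: "kns.<namespace>", "ksa.<namespace>.<serviceaccount>"
}

func main() {
	wep := WorkloadEndpoint{
		Namespace:     "kube-system",
		Pod:           "coredns-674b8bbfcf-msd89",
		InterfaceName: "calide4cf757f26",
		MAC:           "26:4e:93:fe:b5:8d",
		IPNetworks:    []string{"192.168.88.131/32"},
		Profiles:      []string{"kns.kube-system", "ksa.kube-system.coredns"},
	}
	fmt.Printf("%+v\n", wep)
}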
Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.870 [INFO][4444] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.929 [INFO][4444] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" host="localhost" Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.940 [INFO][4444] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.950 [INFO][4444] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.953 [INFO][4444] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.955 [INFO][4444] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.955 [INFO][4444] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" host="localhost" Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.957 [INFO][4444] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164 Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.963 [INFO][4444] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" host="localhost" Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.970 [INFO][4444] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" host="localhost" Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.970 [INFO][4444] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" host="localhost" Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.970 [INFO][4444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:09:49.996424 containerd[1446]: 2025-09-06 00:09:49.970 [INFO][4444] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" HandleID="k8s-pod-network.8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" Workload="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:09:49.996950 containerd[1446]: 2025-09-06 00:09:49.974 [INFO][4396] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" Namespace="calico-system" Pod="csi-node-driver-r2plj" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2plj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r2plj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20b69192-52d9-4abd-8a15-9ae7c5a2f6fb", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-r2plj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8a96f891182", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:49.996950 containerd[1446]: 2025-09-06 00:09:49.974 [INFO][4396] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" Namespace="calico-system" Pod="csi-node-driver-r2plj" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:09:49.996950 containerd[1446]: 2025-09-06 00:09:49.974 [INFO][4396] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a96f891182 ContainerID="8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" Namespace="calico-system" Pod="csi-node-driver-r2plj" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:09:49.996950 containerd[1446]: 2025-09-06 00:09:49.976 [INFO][4396] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" Namespace="calico-system" Pod="csi-node-driver-r2plj" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:09:49.996950 containerd[1446]: 2025-09-06 00:09:49.977 [INFO][4396] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" Namespace="calico-system" Pod="csi-node-driver-r2plj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--r2plj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r2plj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20b69192-52d9-4abd-8a15-9ae7c5a2f6fb", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164", Pod:"csi-node-driver-r2plj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8a96f891182", MAC:"0e:0c:f5:57:a1:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:49.996950 containerd[1446]: 2025-09-06 00:09:49.991 [INFO][4396] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164" Namespace="calico-system" Pod="csi-node-driver-r2plj" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:09:50.017476 systemd[1]: Started cri-containerd-730d4cecb69b591449343f980b78470185c63403f639fd2996c5827b7fbef95a.scope - libcontainer container 730d4cecb69b591449343f980b78470185c63403f639fd2996c5827b7fbef95a. Sep 6 00:09:50.024860 containerd[1446]: time="2025-09-06T00:09:50.024772535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:50.024860 containerd[1446]: time="2025-09-06T00:09:50.024833536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:50.025054 containerd[1446]: time="2025-09-06T00:09:50.024844136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:50.025186 containerd[1446]: time="2025-09-06T00:09:50.025154502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:50.028318 systemd[1]: Started cri-containerd-b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6.scope - libcontainer container b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6. 
Sep 6 00:09:50.046739 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:09:50.051915 systemd[1]: Started cri-containerd-8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164.scope - libcontainer container 8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164. Sep 6 00:09:50.054029 containerd[1446]: time="2025-09-06T00:09:50.053942973Z" level=info msg="StartContainer for \"730d4cecb69b591449343f980b78470185c63403f639fd2996c5827b7fbef95a\" returns successfully" Sep 6 00:09:50.089294 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:09:50.096156 containerd[1446]: time="2025-09-06T00:09:50.095446808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57978c77c8-87cm2,Uid:28c61605-48bf-420c-9e67-dd95e49a735f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6\"" Sep 6 00:09:50.119984 containerd[1446]: time="2025-09-06T00:09:50.119944197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2plj,Uid:20b69192-52d9-4abd-8a15-9ae7c5a2f6fb,Namespace:calico-system,Attempt:1,} returns sandbox id \"8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164\"" Sep 6 00:09:50.520875 containerd[1446]: time="2025-09-06T00:09:50.520315341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:50.521301 containerd[1446]: time="2025-09-06T00:09:50.521039555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 6 00:09:50.522803 containerd[1446]: time="2025-09-06T00:09:50.522457662Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:50.525406 containerd[1446]: time="2025-09-06T00:09:50.525377318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:50.526015 containerd[1446]: time="2025-09-06T00:09:50.525971729Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.768040517s" Sep 6 00:09:50.526015 containerd[1446]: time="2025-09-06T00:09:50.526007650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 6 00:09:50.527127 containerd[1446]: time="2025-09-06T00:09:50.527098310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 00:09:50.541975 containerd[1446]: time="2025-09-06T00:09:50.541929234Z" level=info msg="CreateContainer within sandbox \"6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 6 00:09:50.556997 containerd[1446]: 
time="2025-09-06T00:09:50.556884361Z" level=info msg="CreateContainer within sandbox \"6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3d4ddd8842e92ee38d3cde37baa8a9bb76bd662920f4dea4341ab3a76ccc5944\"" Sep 6 00:09:50.558598 containerd[1446]: time="2025-09-06T00:09:50.557693816Z" level=info msg="StartContainer for \"3d4ddd8842e92ee38d3cde37baa8a9bb76bd662920f4dea4341ab3a76ccc5944\"" Sep 6 00:09:50.593893 systemd[1]: Started cri-containerd-3d4ddd8842e92ee38d3cde37baa8a9bb76bd662920f4dea4341ab3a76ccc5944.scope - libcontainer container 3d4ddd8842e92ee38d3cde37baa8a9bb76bd662920f4dea4341ab3a76ccc5944. Sep 6 00:09:50.629290 containerd[1446]: time="2025-09-06T00:09:50.629194665Z" level=info msg="StartContainer for \"3d4ddd8842e92ee38d3cde37baa8a9bb76bd662920f4dea4341ab3a76ccc5944\" returns successfully" Sep 6 00:09:50.688866 kubelet[2465]: E0906 00:09:50.688143 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:50.701207 systemd-networkd[1380]: calid58199084d0: Gained IPv6LL Sep 6 00:09:50.705507 kubelet[2465]: I0906 00:09:50.705446 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-msd89" podStartSLOduration=36.705426564 podStartE2EDuration="36.705426564s" podCreationTimestamp="2025-09-06 00:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:09:50.700430428 +0000 UTC m=+43.285254186" watchObservedRunningTime="2025-09-06 00:09:50.705426564 +0000 UTC m=+43.290250322" Sep 6 00:09:50.728850 kubelet[2465]: I0906 00:09:50.728683 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5cd46bb99-vsnrh" podStartSLOduration=19.959187985 podStartE2EDuration="21.728665649s" podCreationTimestamp="2025-09-06 00:09:29 +0000 UTC" firstStartedPulling="2025-09-06 00:09:48.757503284 +0000 UTC m=+41.342327042" lastFinishedPulling="2025-09-06 00:09:50.526980948 +0000 UTC m=+43.111804706" observedRunningTime="2025-09-06 00:09:50.728287642 +0000 UTC m=+43.313111400" watchObservedRunningTime="2025-09-06 00:09:50.728665649 +0000 UTC m=+43.313489407" Sep 6 00:09:51.492337 containerd[1446]: time="2025-09-06T00:09:51.491955790Z" level=info msg="StopPodSandbox for \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\"" Sep 6 00:09:51.492337 containerd[1446]: time="2025-09-06T00:09:51.492302556Z" level=info msg="StopPodSandbox for \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\"" Sep 6 00:09:51.591247 containerd[1446]: 2025-09-06 00:09:51.545 [INFO][4730] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:09:51.591247 containerd[1446]: 2025-09-06 00:09:51.545 [INFO][4730] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" iface="eth0" netns="/var/run/netns/cni-2de21235-4397-cb86-db03-6075d4521adb" Sep 6 00:09:51.591247 containerd[1446]: 2025-09-06 00:09:51.546 [INFO][4730] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" iface="eth0" netns="/var/run/netns/cni-2de21235-4397-cb86-db03-6075d4521adb" Sep 6 00:09:51.591247 containerd[1446]: 2025-09-06 00:09:51.546 [INFO][4730] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" iface="eth0" netns="/var/run/netns/cni-2de21235-4397-cb86-db03-6075d4521adb" Sep 6 00:09:51.591247 containerd[1446]: 2025-09-06 00:09:51.546 [INFO][4730] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:09:51.591247 containerd[1446]: 2025-09-06 00:09:51.546 [INFO][4730] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:09:51.591247 containerd[1446]: 2025-09-06 00:09:51.574 [INFO][4747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" HandleID="k8s-pod-network.5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Workload="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:09:51.591247 containerd[1446]: 2025-09-06 00:09:51.574 [INFO][4747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:51.591247 containerd[1446]: 2025-09-06 00:09:51.574 [INFO][4747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:09:51.591247 containerd[1446]: 2025-09-06 00:09:51.585 [WARNING][4747] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" HandleID="k8s-pod-network.5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Workload="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:09:51.591247 containerd[1446]: 2025-09-06 00:09:51.585 [INFO][4747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" HandleID="k8s-pod-network.5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Workload="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:09:51.591247 containerd[1446]: 2025-09-06 00:09:51.587 [INFO][4747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:09:51.591247 containerd[1446]: 2025-09-06 00:09:51.588 [INFO][4730] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:09:51.591853 containerd[1446]: time="2025-09-06T00:09:51.591398766Z" level=info msg="TearDown network for sandbox \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\" successfully" Sep 6 00:09:51.591853 containerd[1446]: time="2025-09-06T00:09:51.591427207Z" level=info msg="StopPodSandbox for \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\" returns successfully" Sep 6 00:09:51.592225 kubelet[2465]: E0906 00:09:51.592174 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:51.592534 containerd[1446]: time="2025-09-06T00:09:51.592504427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jn4fk,Uid:8fe42dcf-c7fe-4f83-b5d8-07f73b02320a,Namespace:kube-system,Attempt:1,}" Sep 6 00:09:51.594300 systemd[1]: run-netns-cni\x2d2de21235\x2d4397\x2dcb86\x2ddb03\x2d6075d4521adb.mount: Deactivated successfully. Sep 6 00:09:51.595902 systemd-networkd[1380]: calide4cf757f26: Gained IPv6LL Sep 6 00:09:51.611568 containerd[1446]: 2025-09-06 00:09:51.558 [INFO][4731] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:09:51.611568 containerd[1446]: 2025-09-06 00:09:51.559 [INFO][4731] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" iface="eth0" netns="/var/run/netns/cni-200a26c7-8dac-2ee9-c5f0-f4d6021077d2" Sep 6 00:09:51.611568 containerd[1446]: 2025-09-06 00:09:51.560 [INFO][4731] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" iface="eth0" netns="/var/run/netns/cni-200a26c7-8dac-2ee9-c5f0-f4d6021077d2" Sep 6 00:09:51.611568 containerd[1446]: 2025-09-06 00:09:51.560 [INFO][4731] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" iface="eth0" netns="/var/run/netns/cni-200a26c7-8dac-2ee9-c5f0-f4d6021077d2" Sep 6 00:09:51.611568 containerd[1446]: 2025-09-06 00:09:51.560 [INFO][4731] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:09:51.611568 containerd[1446]: 2025-09-06 00:09:51.560 [INFO][4731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:09:51.611568 containerd[1446]: 2025-09-06 00:09:51.584 [INFO][4755] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" HandleID="k8s-pod-network.5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Workload="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:09:51.611568 containerd[1446]: 2025-09-06 00:09:51.584 [INFO][4755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:51.611568 containerd[1446]: 2025-09-06 00:09:51.587 [INFO][4755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:09:51.611568 containerd[1446]: 2025-09-06 00:09:51.603 [WARNING][4755] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" HandleID="k8s-pod-network.5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Workload="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:09:51.611568 containerd[1446]: 2025-09-06 00:09:51.603 [INFO][4755] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" HandleID="k8s-pod-network.5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Workload="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:09:51.611568 containerd[1446]: 2025-09-06 00:09:51.606 [INFO][4755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:09:51.611568 containerd[1446]: 2025-09-06 00:09:51.608 [INFO][4731] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:09:51.611929 containerd[1446]: time="2025-09-06T00:09:51.611794147Z" level=info msg="TearDown network for sandbox \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\" successfully" Sep 6 00:09:51.611929 containerd[1446]: time="2025-09-06T00:09:51.611817268Z" level=info msg="StopPodSandbox for \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\" returns successfully" Sep 6 00:09:51.613278 containerd[1446]: time="2025-09-06T00:09:51.613245374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57978c77c8-2bgst,Uid:310f0b3e-872d-47ec-bab7-107416283ff9,Namespace:calico-apiserver,Attempt:1,}" Sep 6 00:09:51.615122 systemd[1]: run-netns-cni\x2d200a26c7\x2d8dac\x2d2ee9\x2dc5f0\x2df4d6021077d2.mount: Deactivated successfully. Sep 6 00:09:51.661772 systemd-networkd[1380]: cali8a96f891182: Gained IPv6LL Sep 6 00:09:51.718189 kubelet[2465]: E0906 00:09:51.718147 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:51.719841 kubelet[2465]: I0906 00:09:51.718256 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:09:51.754615 systemd-networkd[1380]: califad2a8044f1: Link UP Sep 6 00:09:51.755629 systemd-networkd[1380]: califad2a8044f1: Gained carrier Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.672 [INFO][4764] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0 coredns-674b8bbfcf- kube-system 8fe42dcf-c7fe-4f83-b5d8-07f73b02320a 991 0 2025-09-06 00:09:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-jn4fk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califad2a8044f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jn4fk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jn4fk-" Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.672 [INFO][4764] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jn4fk" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.712 [INFO][4791] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" HandleID="k8s-pod-network.5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" Workload="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.712 [INFO][4791] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" HandleID="k8s-pod-network.5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" Workload="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004941f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-jn4fk", "timestamp":"2025-09-06 00:09:51.712321544 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.712 [INFO][4791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.712 [INFO][4791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.712 [INFO][4791] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.721 [INFO][4791] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" host="localhost" Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.725 [INFO][4791] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.729 [INFO][4791] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.731 [INFO][4791] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.734 [INFO][4791] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.734 [INFO][4791] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" host="localhost" Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.738 [INFO][4791] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3 Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.742 [INFO][4791] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" host="localhost" Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.747 [INFO][4791] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" 
host="localhost" Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.748 [INFO][4791] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" host="localhost" Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.748 [INFO][4791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:09:51.776858 containerd[1446]: 2025-09-06 00:09:51.748 [INFO][4791] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" HandleID="k8s-pod-network.5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" Workload="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:09:51.777706 containerd[1446]: 2025-09-06 00:09:51.751 [INFO][4764] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jn4fk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8fe42dcf-c7fe-4f83-b5d8-07f73b02320a", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-jn4fk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califad2a8044f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:51.777706 containerd[1446]: 2025-09-06 00:09:51.751 [INFO][4764] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jn4fk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:09:51.777706 containerd[1446]: 2025-09-06 00:09:51.751 [INFO][4764] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califad2a8044f1 ContainerID="5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jn4fk" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:09:51.777706 containerd[1446]: 2025-09-06 00:09:51.754 [INFO][4764] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jn4fk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:09:51.777706 containerd[1446]: 2025-09-06 00:09:51.755 [INFO][4764] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jn4fk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8fe42dcf-c7fe-4f83-b5d8-07f73b02320a", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3", Pod:"coredns-674b8bbfcf-jn4fk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califad2a8044f1", MAC:"16:e9:5b:e8:b9:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:51.777706 containerd[1446]: 2025-09-06 00:09:51.768 [INFO][4764] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jn4fk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:09:51.835043 containerd[1446]: time="2025-09-06T00:09:51.834937794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:51.835043 containerd[1446]: time="2025-09-06T00:09:51.835009675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:51.835281 containerd[1446]: time="2025-09-06T00:09:51.835024356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:51.835281 containerd[1446]: time="2025-09-06T00:09:51.835119317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:51.865897 systemd[1]: Started cri-containerd-5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3.scope - libcontainer container 5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3. Sep 6 00:09:51.867403 systemd-networkd[1380]: caliaae0e8814f5: Link UP Sep 6 00:09:51.868259 systemd-networkd[1380]: caliaae0e8814f5: Gained carrier Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.675 [INFO][4776] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0 calico-apiserver-57978c77c8- calico-apiserver 310f0b3e-872d-47ec-bab7-107416283ff9 992 0 2025-09-06 00:09:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57978c77c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57978c77c8-2bgst eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaae0e8814f5 [] [] }} ContainerID="54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-2bgst" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--2bgst-" Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.675 [INFO][4776] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-2bgst" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.717 [INFO][4793] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" HandleID="k8s-pod-network.54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" Workload="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.717 [INFO][4793] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" HandleID="k8s-pod-network.54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" Workload="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011c8a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57978c77c8-2bgst", "timestamp":"2025-09-06 00:09:51.717266597 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.717 [INFO][4793] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.748 [INFO][4793] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
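Looking back at the StopPodSandbox entries at 00:09:51.5x above: the CNI DEL path is the mirror image of the ADD path, and it is deliberately idempotent. A missing veth ("Workload's veth was already gone. Nothing to do.") and a missing allocation ("Asked to release address but it doesn't exist. Ignoring") are both tolerated, with the release retried by workload ID when the handle ID finds nothing. A compact sketch of that shape, with hypothetical store and netns helpers:

package main

import (
	"errors"
	"fmt"
	"sync"
)

var (
	hostIPAMLock   sync.Mutex
	errAlreadyGone = errors.New("veth already gone")
)

// store is a stand-in for Calico's IPAM datastore.
type store struct{ byHandle, byWorkload map[string]string }

func (s *store) releaseByHandle(h string) bool {
	_, ok := s.byHandle[h]
	delete(s.byHandle, h)
	return ok
}

func (s *store) releaseByWorkload(w string) bool {
	_, ok := s.byWorkload[w]
	delete(s.byWorkload, w)
	return ok
}

// deleteVeth stands in for entering the netns and removing the device;
// here it always reports the "already gone" case seen in the log.
func deleteVeth(netnsPath string) error { return errAlreadyGone }

func teardown(s *store, netnsPath, handleID, workloadID string) error {
	// "Deleting workload's device in netns." — a vanished veth is not an error.
	if err := deleteVeth(netnsPath); err != nil && !errors.Is(err, errAlreadyGone) {
		return err
	}
	hostIPAMLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."
	if !s.releaseByHandle(handleID) {
		// "Releasing address using workloadID" as the fallback path.
		s.releaseByWorkload(workloadID)
	}
	return nil // "Teardown processing complete."
}

func main() {
	s := &store{
		byHandle:   map[string]string{},
		byWorkload: map[string]string{"example-workload": "192.168.88.130"},
	}
	fmt.Println(teardown(s, "/var/run/netns/example", "example-handle", "example-workload"))
}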
Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.748 [INFO][4793] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.822 [INFO][4793] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" host="localhost" Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.827 [INFO][4793] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.832 [INFO][4793] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.835 [INFO][4793] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.838 [INFO][4793] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.838 [INFO][4793] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" host="localhost" Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.841 [INFO][4793] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.845 [INFO][4793] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" host="localhost" Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.850 [INFO][4793] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" host="localhost" Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.850 [INFO][4793] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" host="localhost" Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.850 [INFO][4793] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
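The "Gained IPv6LL" messages from systemd-networkd in this log report kernel-assigned fe80::/64 link-local addresses on the cali* host-side veths. When an interface uses EUI-64 address generation, that address is derived from the MAC by flipping the universal/local bit and splicing in ff:fe; whether a given interface actually uses EUI-64 or stable-privacy generation depends on its addr_gen_mode, so the sketch below is illustrative.

package main

import (
	"fmt"
	"net"
)

// eui64LinkLocal derives the fe80::/64 link-local address the kernel would
// form from a MAC under EUI-64 generation: flip the universal/local bit of
// the first octet and insert ff:fe between the OUI and device halves.
func eui64LinkLocal(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02 // flip the U/L bit
	ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
	ip[12], ip[13], ip[14], ip[15] = 0xfe, mac[3], mac[4], mac[5]
	return ip
}

func main() {
	mac, _ := net.ParseMAC("26:4e:93:fe:b5:8d") // calide4cf757f26's MAC from the log
	fmt.Println(eui64LinkLocal(mac))            // fe80::244e:93ff:fefe:b58d
}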
Sep 6 00:09:51.886440 containerd[1446]: 2025-09-06 00:09:51.850 [INFO][4793] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" HandleID="k8s-pod-network.54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" Workload="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:09:51.887087 containerd[1446]: 2025-09-06 00:09:51.864 [INFO][4776] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-2bgst" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0", GenerateName:"calico-apiserver-57978c77c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"310f0b3e-872d-47ec-bab7-107416283ff9", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57978c77c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57978c77c8-2bgst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaae0e8814f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:51.887087 containerd[1446]: 2025-09-06 00:09:51.864 [INFO][4776] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-2bgst" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:09:51.887087 containerd[1446]: 2025-09-06 00:09:51.864 [INFO][4776] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaae0e8814f5 ContainerID="54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-2bgst" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:09:51.887087 containerd[1446]: 2025-09-06 00:09:51.868 [INFO][4776] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-2bgst" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:09:51.887087 containerd[1446]: 2025-09-06 00:09:51.869 [INFO][4776] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-2bgst" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0", GenerateName:"calico-apiserver-57978c77c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"310f0b3e-872d-47ec-bab7-107416283ff9", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57978c77c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a", Pod:"calico-apiserver-57978c77c8-2bgst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaae0e8814f5", MAC:"92:5e:c7:c9:2d:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:51.887087 containerd[1446]: 2025-09-06 00:09:51.878 [INFO][4776] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a" Namespace="calico-apiserver" Pod="calico-apiserver-57978c77c8-2bgst" WorkloadEndpoint="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:09:51.890117 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:09:51.910279 containerd[1446]: time="2025-09-06T00:09:51.910108078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jn4fk,Uid:8fe42dcf-c7fe-4f83-b5d8-07f73b02320a,Namespace:kube-system,Attempt:1,} returns sandbox id \"5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3\"" Sep 6 00:09:51.910876 containerd[1446]: time="2025-09-06T00:09:51.910532565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:51.910876 containerd[1446]: time="2025-09-06T00:09:51.910578846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:51.910876 containerd[1446]: time="2025-09-06T00:09:51.910588887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:51.910876 containerd[1446]: time="2025-09-06T00:09:51.910665808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:51.911978 kubelet[2465]: E0906 00:09:51.911948 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:51.916286 containerd[1446]: time="2025-09-06T00:09:51.916203751Z" level=info msg="CreateContainer within sandbox \"5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:09:51.916525 systemd-networkd[1380]: cali0067f3b4759: Gained IPv6LL Sep 6 00:09:51.931947 systemd[1]: Started cri-containerd-54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a.scope - libcontainer container 54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a. Sep 6 00:09:51.945154 containerd[1446]: time="2025-09-06T00:09:51.945109451Z" level=info msg="CreateContainer within sandbox \"5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77be56936e9dae092917e2754bacb1e34f2a2831cf02c5da15ed17ec107780d8\"" Sep 6 00:09:51.947377 containerd[1446]: time="2025-09-06T00:09:51.946870324Z" level=info msg="StartContainer for \"77be56936e9dae092917e2754bacb1e34f2a2831cf02c5da15ed17ec107780d8\"" Sep 6 00:09:51.957941 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:09:51.987623 containerd[1446]: time="2025-09-06T00:09:51.987582764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57978c77c8-2bgst,Uid:310f0b3e-872d-47ec-bab7-107416283ff9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a\"" Sep 6 00:09:52.013940 systemd[1]: Started cri-containerd-77be56936e9dae092917e2754bacb1e34f2a2831cf02c5da15ed17ec107780d8.scope - libcontainer container 77be56936e9dae092917e2754bacb1e34f2a2831cf02c5da15ed17ec107780d8. 
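[Editor's note] The kubelet dns.go "Nameserver limits exceeded" errors that recur through this boot come from the glibc resolver's fixed limit of three nameserver entries: when a pod's resolv.conf would inherit more than three servers, kubelet keeps the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and reports the rest as omitted. A hypothetical host /etc/resolv.conf that reproduces exactly this message:

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    # fourth entry (hypothetical) exceeds the three-server limit;
    # kubelet drops it and logs the error above
    nameserver 9.9.9.9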
Sep 6 00:09:52.042628 containerd[1446]: time="2025-09-06T00:09:52.042588893Z" level=info msg="StartContainer for \"77be56936e9dae092917e2754bacb1e34f2a2831cf02c5da15ed17ec107780d8\" returns successfully" Sep 6 00:09:52.386657 containerd[1446]: time="2025-09-06T00:09:52.386198718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:52.388373 containerd[1446]: time="2025-09-06T00:09:52.388344757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 6 00:09:52.389458 containerd[1446]: time="2025-09-06T00:09:52.389434897Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:52.392359 containerd[1446]: time="2025-09-06T00:09:52.392302389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:52.393051 containerd[1446]: time="2025-09-06T00:09:52.393017322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.865885611s" Sep 6 00:09:52.393051 containerd[1446]: time="2025-09-06T00:09:52.393051403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 6 00:09:52.395342 containerd[1446]: time="2025-09-06T00:09:52.395267003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 6 00:09:52.400916 containerd[1446]: time="2025-09-06T00:09:52.400875026Z" level=info msg="CreateContainer within sandbox \"b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 00:09:52.417797 containerd[1446]: time="2025-09-06T00:09:52.417756053Z" level=info msg="CreateContainer within sandbox \"b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"25b3d131a48aa62ccd02dca297a5ae3b760bd75859c4fd967fea9fe4293c903b\"" Sep 6 00:09:52.418965 containerd[1446]: time="2025-09-06T00:09:52.418339864Z" level=info msg="StartContainer for \"25b3d131a48aa62ccd02dca297a5ae3b760bd75859c4fd967fea9fe4293c903b\"" Sep 6 00:09:52.446912 systemd[1]: Started cri-containerd-25b3d131a48aa62ccd02dca297a5ae3b760bd75859c4fd967fea9fe4293c903b.scope - libcontainer container 25b3d131a48aa62ccd02dca297a5ae3b760bd75859c4fd967fea9fe4293c903b. 
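[Editor's note] Each RunPodSandbox / CreateContainer / StartContainer triple in these entries is kubelet driving containerd through the CRI: the sandbox is created first and returns an ID, the container is created inside that sandbox, and the start lands as a cri-containerd-<id>.scope unit under systemd. As a usage sketch, the same sequence can be driven by hand against the CRI socket with crictl (the two JSON config file names are placeholders):

    POD_ID=$(crictl runp pod-config.json)                                     # RunPodSandbox
    CTR_ID=$(crictl create "$POD_ID" container-config.json pod-config.json)   # CreateContainer
    crictl start "$CTR_ID"                                                    # StartContainer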
Sep 6 00:09:52.477104 containerd[1446]: time="2025-09-06T00:09:52.477063495Z" level=info msg="StartContainer for \"25b3d131a48aa62ccd02dca297a5ae3b760bd75859c4fd967fea9fe4293c903b\" returns successfully" Sep 6 00:09:52.491204 containerd[1446]: time="2025-09-06T00:09:52.491113911Z" level=info msg="StopPodSandbox for \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\"" Sep 6 00:09:52.576443 containerd[1446]: 2025-09-06 00:09:52.535 [INFO][4999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:09:52.576443 containerd[1446]: 2025-09-06 00:09:52.536 [INFO][4999] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" iface="eth0" netns="/var/run/netns/cni-8b54d0a2-8ed7-9a80-edae-2c4355134d70" Sep 6 00:09:52.576443 containerd[1446]: 2025-09-06 00:09:52.536 [INFO][4999] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" iface="eth0" netns="/var/run/netns/cni-8b54d0a2-8ed7-9a80-edae-2c4355134d70" Sep 6 00:09:52.576443 containerd[1446]: 2025-09-06 00:09:52.537 [INFO][4999] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" iface="eth0" netns="/var/run/netns/cni-8b54d0a2-8ed7-9a80-edae-2c4355134d70" Sep 6 00:09:52.576443 containerd[1446]: 2025-09-06 00:09:52.537 [INFO][4999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:09:52.576443 containerd[1446]: 2025-09-06 00:09:52.537 [INFO][4999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:09:52.576443 containerd[1446]: 2025-09-06 00:09:52.561 [INFO][5011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" HandleID="k8s-pod-network.a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Workload="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:09:52.576443 containerd[1446]: 2025-09-06 00:09:52.561 [INFO][5011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:52.576443 containerd[1446]: 2025-09-06 00:09:52.561 [INFO][5011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:09:52.576443 containerd[1446]: 2025-09-06 00:09:52.571 [WARNING][5011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" HandleID="k8s-pod-network.a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Workload="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:09:52.576443 containerd[1446]: 2025-09-06 00:09:52.571 [INFO][5011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" HandleID="k8s-pod-network.a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Workload="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:09:52.576443 containerd[1446]: 2025-09-06 00:09:52.572 [INFO][5011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
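[Editor's note] The teardown above for sandbox a4dd7992… ends with the IPAM plugin being asked to release an address that no longer exists, and it deliberately ignores the request: a CNI DEL must be safe to repeat, because the runtime may retry deletion after a partial failure. A toy Go sketch of that tolerant-release rule (the allocator type and all names are invented for illustration):

    package main

    import "fmt"

    // toyAllocator stands in for an IPAM datastore keyed by handle ID.
    type toyAllocator struct {
        byHandle map[string]string
    }

    // ReleaseByHandle mimics the behaviour in the log: releasing a handle
    // that is already gone is logged and ignored, never surfaced as an error.
    func (a *toyAllocator) ReleaseByHandle(handle string) {
        if _, ok := a.byHandle[handle]; !ok {
            fmt.Printf("WARNING: asked to release %q but it doesn't exist; ignoring\n", handle)
            return
        }
        delete(a.byHandle, handle)
        fmt.Printf("released %q\n", handle)
    }

    func main() {
        a := &toyAllocator{byHandle: map[string]string{"k8s-pod-network.example": "192.168.88.131"}}
        a.ReleaseByHandle("k8s-pod-network.example") // first DEL releases the address
        a.ReleaseByHandle("k8s-pod-network.example") // a retried DEL is a harmless no-op
    }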
Sep 6 00:09:52.576443 containerd[1446]: 2025-09-06 00:09:52.574 [INFO][4999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:09:52.577622 containerd[1446]: time="2025-09-06T00:09:52.576600990Z" level=info msg="TearDown network for sandbox \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\" successfully" Sep 6 00:09:52.577622 containerd[1446]: time="2025-09-06T00:09:52.576632950Z" level=info msg="StopPodSandbox for \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\" returns successfully" Sep 6 00:09:52.577622 containerd[1446]: time="2025-09-06T00:09:52.577276642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-cdhv4,Uid:b270af4a-cf4b-43ff-ae1a-ec07098c6468,Namespace:calico-system,Attempt:1,}" Sep 6 00:09:52.598517 systemd[1]: run-netns-cni\x2d8b54d0a2\x2d8ed7\x2d9a80\x2dedae\x2d2c4355134d70.mount: Deactivated successfully. Sep 6 00:09:52.703832 systemd-networkd[1380]: calie7d6167f246: Link UP Sep 6 00:09:52.705089 systemd-networkd[1380]: calie7d6167f246: Gained carrier Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.629 [INFO][5019] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--cdhv4-eth0 goldmane-54d579b49d- calico-system b270af4a-cf4b-43ff-ae1a-ec07098c6468 1011 0 2025-09-06 00:09:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-cdhv4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie7d6167f246 [] [] }} ContainerID="76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" Namespace="calico-system" Pod="goldmane-54d579b49d-cdhv4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cdhv4-" Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.630 [INFO][5019] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" Namespace="calico-system" Pod="goldmane-54d579b49d-cdhv4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.653 [INFO][5034] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" HandleID="k8s-pod-network.76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" Workload="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.654 [INFO][5034] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" HandleID="k8s-pod-network.76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" Workload="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-cdhv4", "timestamp":"2025-09-06 00:09:52.65396152 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.654 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.654 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.654 [INFO][5034] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.663 [INFO][5034] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" host="localhost" Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.670 [INFO][5034] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.676 [INFO][5034] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.679 [INFO][5034] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.683 [INFO][5034] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.683 [INFO][5034] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" host="localhost" Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.687 [INFO][5034] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9 Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.693 [INFO][5034] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" host="localhost" Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.700 [INFO][5034] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" host="localhost" Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.700 [INFO][5034] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" host="localhost" Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.700 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:09:52.735903 containerd[1446]: 2025-09-06 00:09:52.700 [INFO][5034] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" HandleID="k8s-pod-network.76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" Workload="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:09:52.736614 containerd[1446]: 2025-09-06 00:09:52.702 [INFO][5019] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" Namespace="calico-system" Pod="goldmane-54d579b49d-cdhv4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--cdhv4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b270af4a-cf4b-43ff-ae1a-ec07098c6468", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-cdhv4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie7d6167f246", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:52.736614 containerd[1446]: 2025-09-06 00:09:52.702 [INFO][5019] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" Namespace="calico-system" Pod="goldmane-54d579b49d-cdhv4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:09:52.736614 containerd[1446]: 2025-09-06 00:09:52.702 [INFO][5019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7d6167f246 ContainerID="76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" Namespace="calico-system" Pod="goldmane-54d579b49d-cdhv4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:09:52.736614 containerd[1446]: 2025-09-06 00:09:52.704 [INFO][5019] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" Namespace="calico-system" Pod="goldmane-54d579b49d-cdhv4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:09:52.736614 containerd[1446]: 2025-09-06 00:09:52.705 [INFO][5019] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" Namespace="calico-system" Pod="goldmane-54d579b49d-cdhv4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--cdhv4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b270af4a-cf4b-43ff-ae1a-ec07098c6468", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9", Pod:"goldmane-54d579b49d-cdhv4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie7d6167f246", MAC:"32:ed:87:41:c7:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:09:52.736614 containerd[1446]: 2025-09-06 00:09:52.732 [INFO][5019] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9" Namespace="calico-system" Pod="goldmane-54d579b49d-cdhv4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:09:52.740059 kubelet[2465]: E0906 00:09:52.739494 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:52.751082 kubelet[2465]: E0906 00:09:52.751050 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:52.780268 kubelet[2465]: I0906 00:09:52.779538 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57978c77c8-87cm2" podStartSLOduration=26.482518346 podStartE2EDuration="28.779519089s" podCreationTimestamp="2025-09-06 00:09:24 +0000 UTC" firstStartedPulling="2025-09-06 00:09:50.097231082 +0000 UTC m=+42.682054840" lastFinishedPulling="2025-09-06 00:09:52.394231825 +0000 UTC m=+44.979055583" observedRunningTime="2025-09-06 00:09:52.779069081 +0000 UTC m=+45.363892839" watchObservedRunningTime="2025-09-06 00:09:52.779519089 +0000 UTC m=+45.364342847" Sep 6 00:09:52.780268 kubelet[2465]: I0906 00:09:52.779842 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jn4fk" podStartSLOduration=38.779833975 podStartE2EDuration="38.779833975s" podCreationTimestamp="2025-09-06 00:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:09:52.761868847 +0000 UTC m=+45.346692605" watchObservedRunningTime="2025-09-06 00:09:52.779833975 +0000 UTC m=+45.364657773" Sep 6 00:09:52.816692 containerd[1446]: 
time="2025-09-06T00:09:52.816541204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:09:52.816910 containerd[1446]: time="2025-09-06T00:09:52.816673287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:09:52.817206 containerd[1446]: time="2025-09-06T00:09:52.816897091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:52.817476 containerd[1446]: time="2025-09-06T00:09:52.817420940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:09:52.844966 systemd[1]: Started cri-containerd-76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9.scope - libcontainer container 76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9. Sep 6 00:09:52.861250 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:09:52.877617 containerd[1446]: time="2025-09-06T00:09:52.877580397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-cdhv4,Uid:b270af4a-cf4b-43ff-ae1a-ec07098c6468,Namespace:calico-system,Attempt:1,} returns sandbox id \"76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9\"" Sep 6 00:09:52.940235 systemd-networkd[1380]: caliaae0e8814f5: Gained IPv6LL Sep 6 00:09:53.260214 systemd-networkd[1380]: califad2a8044f1: Gained IPv6LL Sep 6 00:09:53.468988 containerd[1446]: time="2025-09-06T00:09:53.468940426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:53.469874 containerd[1446]: time="2025-09-06T00:09:53.469846322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 6 00:09:53.470674 containerd[1446]: time="2025-09-06T00:09:53.470487454Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:53.473125 containerd[1446]: time="2025-09-06T00:09:53.473098700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:53.473935 containerd[1446]: time="2025-09-06T00:09:53.473903595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.078547109s" Sep 6 00:09:53.474028 containerd[1446]: time="2025-09-06T00:09:53.473940635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 6 00:09:53.487815 containerd[1446]: time="2025-09-06T00:09:53.487779602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 00:09:53.493324 containerd[1446]: time="2025-09-06T00:09:53.493284380Z" level=info msg="CreateContainer within sandbox 
\"8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 6 00:09:53.517600 containerd[1446]: time="2025-09-06T00:09:53.517481451Z" level=info msg="CreateContainer within sandbox \"8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c13f5a81c057033caec73fe4112bdee207793da0d4cc47d5bfd79bc04bef743a\"" Sep 6 00:09:53.520071 containerd[1446]: time="2025-09-06T00:09:53.520039897Z" level=info msg="StartContainer for \"c13f5a81c057033caec73fe4112bdee207793da0d4cc47d5bfd79bc04bef743a\"" Sep 6 00:09:53.544911 systemd[1]: Started cri-containerd-c13f5a81c057033caec73fe4112bdee207793da0d4cc47d5bfd79bc04bef743a.scope - libcontainer container c13f5a81c057033caec73fe4112bdee207793da0d4cc47d5bfd79bc04bef743a. Sep 6 00:09:53.573007 containerd[1446]: time="2025-09-06T00:09:53.572954680Z" level=info msg="StartContainer for \"c13f5a81c057033caec73fe4112bdee207793da0d4cc47d5bfd79bc04bef743a\" returns successfully" Sep 6 00:09:53.693770 containerd[1446]: time="2025-09-06T00:09:53.693712272Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:53.694362 containerd[1446]: time="2025-09-06T00:09:53.694323843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 6 00:09:53.696495 containerd[1446]: time="2025-09-06T00:09:53.696376279Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 208.554636ms" Sep 6 00:09:53.696495 containerd[1446]: time="2025-09-06T00:09:53.696415960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 6 00:09:53.698226 containerd[1446]: time="2025-09-06T00:09:53.698045589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 6 00:09:53.702462 containerd[1446]: time="2025-09-06T00:09:53.702428227Z" level=info msg="CreateContainer within sandbox \"54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 00:09:53.713817 containerd[1446]: time="2025-09-06T00:09:53.713683348Z" level=info msg="CreateContainer within sandbox \"54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ceae2ee7fde4362146bebb43e81c7a8a76fa713940f9af06fe2db85cf4e4ac36\"" Sep 6 00:09:53.716787 containerd[1446]: time="2025-09-06T00:09:53.716754282Z" level=info msg="StartContainer for \"ceae2ee7fde4362146bebb43e81c7a8a76fa713940f9af06fe2db85cf4e4ac36\"" Sep 6 00:09:53.752180 systemd[1]: Started cri-containerd-ceae2ee7fde4362146bebb43e81c7a8a76fa713940f9af06fe2db85cf4e4ac36.scope - libcontainer container ceae2ee7fde4362146bebb43e81c7a8a76fa713940f9af06fe2db85cf4e4ac36. 
Sep 6 00:09:53.754944 kubelet[2465]: E0906 00:09:53.754635 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:53.788148 containerd[1446]: time="2025-09-06T00:09:53.788039833Z" level=info msg="StartContainer for \"ceae2ee7fde4362146bebb43e81c7a8a76fa713940f9af06fe2db85cf4e4ac36\" returns successfully" Sep 6 00:09:53.936330 kubelet[2465]: I0906 00:09:53.936086 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:09:54.670980 systemd-networkd[1380]: calie7d6167f246: Gained IPv6LL Sep 6 00:09:54.762461 kubelet[2465]: E0906 00:09:54.761996 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:09:54.774767 kubelet[2465]: I0906 00:09:54.773842 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57978c77c8-2bgst" podStartSLOduration=29.065666279 podStartE2EDuration="30.773826261s" podCreationTimestamp="2025-09-06 00:09:24 +0000 UTC" firstStartedPulling="2025-09-06 00:09:51.989063832 +0000 UTC m=+44.573887590" lastFinishedPulling="2025-09-06 00:09:53.697223814 +0000 UTC m=+46.282047572" observedRunningTime="2025-09-06 00:09:54.773383933 +0000 UTC m=+47.358207691" watchObservedRunningTime="2025-09-06 00:09:54.773826261 +0000 UTC m=+47.358650019" Sep 6 00:09:55.652777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1589380870.mount: Deactivated successfully. Sep 6 00:09:56.183155 containerd[1446]: time="2025-09-06T00:09:56.183088658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:56.183757 containerd[1446]: time="2025-09-06T00:09:56.183659828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 6 00:09:56.184757 containerd[1446]: time="2025-09-06T00:09:56.184716966Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:56.187831 containerd[1446]: time="2025-09-06T00:09:56.187798137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:56.188603 containerd[1446]: time="2025-09-06T00:09:56.188561150Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.490480361s" Sep 6 00:09:56.188603 containerd[1446]: time="2025-09-06T00:09:56.188597751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 6 00:09:56.190463 containerd[1446]: time="2025-09-06T00:09:56.190229218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 6 00:09:56.194954 containerd[1446]: time="2025-09-06T00:09:56.194861215Z" level=info 
msg="CreateContainer within sandbox \"76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 6 00:09:56.206263 containerd[1446]: time="2025-09-06T00:09:56.206098683Z" level=info msg="CreateContainer within sandbox \"76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b940be4a7c30beb8ff46d4c4ba1c6f0ae27c4a321e87914e029badb0d3c213d8\"" Sep 6 00:09:56.206841 containerd[1446]: time="2025-09-06T00:09:56.206810295Z" level=info msg="StartContainer for \"b940be4a7c30beb8ff46d4c4ba1c6f0ae27c4a321e87914e029badb0d3c213d8\"" Sep 6 00:09:56.247911 systemd[1]: Started cri-containerd-b940be4a7c30beb8ff46d4c4ba1c6f0ae27c4a321e87914e029badb0d3c213d8.scope - libcontainer container b940be4a7c30beb8ff46d4c4ba1c6f0ae27c4a321e87914e029badb0d3c213d8. Sep 6 00:09:56.281263 containerd[1446]: time="2025-09-06T00:09:56.281221780Z" level=info msg="StartContainer for \"b940be4a7c30beb8ff46d4c4ba1c6f0ae27c4a321e87914e029badb0d3c213d8\" returns successfully" Sep 6 00:09:56.726893 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:49398.service - OpenSSH per-connection server daemon (10.0.0.1:49398). Sep 6 00:09:56.791359 kubelet[2465]: I0906 00:09:56.791204 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-cdhv4" podStartSLOduration=25.48014976 podStartE2EDuration="28.791179992s" podCreationTimestamp="2025-09-06 00:09:28 +0000 UTC" firstStartedPulling="2025-09-06 00:09:52.878775819 +0000 UTC m=+45.463599577" lastFinishedPulling="2025-09-06 00:09:56.189806051 +0000 UTC m=+48.774629809" observedRunningTime="2025-09-06 00:09:56.789581645 +0000 UTC m=+49.374405403" watchObservedRunningTime="2025-09-06 00:09:56.791179992 +0000 UTC m=+49.376003710" Sep 6 00:09:56.813802 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 49398 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:09:56.815557 sshd[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:09:56.821688 systemd-logind[1425]: New session 8 of user core. Sep 6 00:09:56.827893 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 6 00:09:57.287322 sshd[5290]: pam_unix(sshd:session): session closed for user core Sep 6 00:09:57.290530 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:49398.service: Deactivated successfully. Sep 6 00:09:57.292318 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:09:57.294570 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:09:57.297243 systemd-logind[1425]: Removed session 8. 
Sep 6 00:09:57.319228 containerd[1446]: time="2025-09-06T00:09:57.319178765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:57.320255 containerd[1446]: time="2025-09-06T00:09:57.320226102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 6 00:09:57.321428 containerd[1446]: time="2025-09-06T00:09:57.321377721Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:57.323794 containerd[1446]: time="2025-09-06T00:09:57.323593757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:09:57.324343 containerd[1446]: time="2025-09-06T00:09:57.324300689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.134035951s" Sep 6 00:09:57.324343 containerd[1446]: time="2025-09-06T00:09:57.324334969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 6 00:09:57.328694 containerd[1446]: time="2025-09-06T00:09:57.328662360Z" level=info msg="CreateContainer within sandbox \"8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 6 00:09:57.345659 containerd[1446]: time="2025-09-06T00:09:57.345615679Z" level=info msg="CreateContainer within sandbox \"8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d81012fda2280b35edbb0d2961cec4a88ed8b86ee3c88292dd0819cd679f43b1\"" Sep 6 00:09:57.346350 containerd[1446]: time="2025-09-06T00:09:57.346234369Z" level=info msg="StartContainer for \"d81012fda2280b35edbb0d2961cec4a88ed8b86ee3c88292dd0819cd679f43b1\"" Sep 6 00:09:57.381160 systemd[1]: run-containerd-runc-k8s.io-d81012fda2280b35edbb0d2961cec4a88ed8b86ee3c88292dd0819cd679f43b1-runc.4byM3v.mount: Deactivated successfully. Sep 6 00:09:57.392954 systemd[1]: Started cri-containerd-d81012fda2280b35edbb0d2961cec4a88ed8b86ee3c88292dd0819cd679f43b1.scope - libcontainer container d81012fda2280b35edbb0d2961cec4a88ed8b86ee3c88292dd0819cd679f43b1. 
Sep 6 00:09:57.421774 containerd[1446]: time="2025-09-06T00:09:57.421674847Z" level=info msg="StartContainer for \"d81012fda2280b35edbb0d2961cec4a88ed8b86ee3c88292dd0819cd679f43b1\" returns successfully" Sep 6 00:09:57.583163 kubelet[2465]: I0906 00:09:57.583031 2465 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 6 00:09:57.586641 kubelet[2465]: I0906 00:09:57.585974 2465 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 6 00:09:57.777292 kubelet[2465]: I0906 00:09:57.777013 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:09:57.792239 kubelet[2465]: I0906 00:09:57.791673 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-r2plj" podStartSLOduration=22.58836693 podStartE2EDuration="29.791658959s" podCreationTimestamp="2025-09-06 00:09:28 +0000 UTC" firstStartedPulling="2025-09-06 00:09:50.121881354 +0000 UTC m=+42.706705112" lastFinishedPulling="2025-09-06 00:09:57.325173423 +0000 UTC m=+49.909997141" observedRunningTime="2025-09-06 00:09:57.791358794 +0000 UTC m=+50.376182552" watchObservedRunningTime="2025-09-06 00:09:57.791658959 +0000 UTC m=+50.376482717" Sep 6 00:10:02.300840 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:36290.service - OpenSSH per-connection server daemon (10.0.0.1:36290). Sep 6 00:10:02.375231 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 36290 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:10:02.377486 sshd[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:10:02.383243 systemd-logind[1425]: New session 9 of user core. Sep 6 00:10:02.397993 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 6 00:10:02.646188 sshd[5377]: pam_unix(sshd:session): session closed for user core Sep 6 00:10:02.654751 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:36290.service: Deactivated successfully. Sep 6 00:10:02.656682 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:10:02.657373 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:10:02.659439 systemd-logind[1425]: Removed session 9. Sep 6 00:10:03.891702 kubelet[2465]: I0906 00:10:03.891506 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:10:06.182433 kubelet[2465]: I0906 00:10:06.182375 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:10:07.473297 containerd[1446]: time="2025-09-06T00:10:07.472595540Z" level=info msg="StopPodSandbox for \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\"" Sep 6 00:10:07.572754 containerd[1446]: 2025-09-06 00:10:07.531 [WARNING][5493] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r2plj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20b69192-52d9-4abd-8a15-9ae7c5a2f6fb", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164", Pod:"csi-node-driver-r2plj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8a96f891182", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:07.572754 containerd[1446]: 2025-09-06 00:10:07.532 [INFO][5493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:10:07.572754 containerd[1446]: 2025-09-06 00:10:07.532 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" iface="eth0" netns="" Sep 6 00:10:07.572754 containerd[1446]: 2025-09-06 00:10:07.532 [INFO][5493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:10:07.572754 containerd[1446]: 2025-09-06 00:10:07.532 [INFO][5493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:10:07.572754 containerd[1446]: 2025-09-06 00:10:07.555 [INFO][5504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" HandleID="k8s-pod-network.936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Workload="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:10:07.572754 containerd[1446]: 2025-09-06 00:10:07.555 [INFO][5504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:07.572754 containerd[1446]: 2025-09-06 00:10:07.555 [INFO][5504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:10:07.572754 containerd[1446]: 2025-09-06 00:10:07.564 [WARNING][5504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" HandleID="k8s-pod-network.936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Workload="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:10:07.572754 containerd[1446]: 2025-09-06 00:10:07.564 [INFO][5504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" HandleID="k8s-pod-network.936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Workload="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:10:07.572754 containerd[1446]: 2025-09-06 00:10:07.568 [INFO][5504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:07.572754 containerd[1446]: 2025-09-06 00:10:07.570 [INFO][5493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:10:07.573579 containerd[1446]: time="2025-09-06T00:10:07.573443846Z" level=info msg="TearDown network for sandbox \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\" successfully" Sep 6 00:10:07.573579 containerd[1446]: time="2025-09-06T00:10:07.573475326Z" level=info msg="StopPodSandbox for \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\" returns successfully" Sep 6 00:10:07.574055 containerd[1446]: time="2025-09-06T00:10:07.574022654Z" level=info msg="RemovePodSandbox for \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\"" Sep 6 00:10:07.575356 containerd[1446]: time="2025-09-06T00:10:07.575292072Z" level=info msg="Forcibly stopping sandbox \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\"" Sep 6 00:10:07.645293 containerd[1446]: 2025-09-06 00:10:07.611 [WARNING][5521] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r2plj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20b69192-52d9-4abd-8a15-9ae7c5a2f6fb", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8fb265bb0269faf031aeaf64df8ea21bbfb6d92b168db432e6ed855554597164", Pod:"csi-node-driver-r2plj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8a96f891182", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:07.645293 containerd[1446]: 2025-09-06 00:10:07.611 [INFO][5521] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:10:07.645293 containerd[1446]: 2025-09-06 00:10:07.611 [INFO][5521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" iface="eth0" netns="" Sep 6 00:10:07.645293 containerd[1446]: 2025-09-06 00:10:07.611 [INFO][5521] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:10:07.645293 containerd[1446]: 2025-09-06 00:10:07.611 [INFO][5521] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:10:07.645293 containerd[1446]: 2025-09-06 00:10:07.630 [INFO][5530] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" HandleID="k8s-pod-network.936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Workload="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:10:07.645293 containerd[1446]: 2025-09-06 00:10:07.630 [INFO][5530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:07.645293 containerd[1446]: 2025-09-06 00:10:07.630 [INFO][5530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:10:07.645293 containerd[1446]: 2025-09-06 00:10:07.640 [WARNING][5530] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" HandleID="k8s-pod-network.936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Workload="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:10:07.645293 containerd[1446]: 2025-09-06 00:10:07.640 [INFO][5530] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" HandleID="k8s-pod-network.936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Workload="localhost-k8s-csi--node--driver--r2plj-eth0" Sep 6 00:10:07.645293 containerd[1446]: 2025-09-06 00:10:07.641 [INFO][5530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:07.645293 containerd[1446]: 2025-09-06 00:10:07.643 [INFO][5521] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9" Sep 6 00:10:07.645717 containerd[1446]: time="2025-09-06T00:10:07.645337302Z" level=info msg="TearDown network for sandbox \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\" successfully" Sep 6 00:10:07.654977 containerd[1446]: time="2025-09-06T00:10:07.654774436Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 6 00:10:07.654977 containerd[1446]: time="2025-09-06T00:10:07.654877797Z" level=info msg="RemovePodSandbox \"936886d17c5a7539f052bc762448b046308fd252ee7964f63b2d048fdd24bad9\" returns successfully" Sep 6 00:10:07.655560 containerd[1446]: time="2025-09-06T00:10:07.655529807Z" level=info msg="StopPodSandbox for \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\"" Sep 6 00:10:07.658537 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:36296.service - OpenSSH per-connection server daemon (10.0.0.1:36296). Sep 6 00:10:07.709625 sshd[5545]: Accepted publickey for core from 10.0.0.1 port 36296 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:10:07.714525 sshd[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:10:07.719254 systemd-logind[1425]: New session 10 of user core. Sep 6 00:10:07.723906 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 6 00:10:07.742018 containerd[1446]: 2025-09-06 00:10:07.698 [WARNING][5550] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0", GenerateName:"calico-kube-controllers-5cd46bb99-", Namespace:"calico-system", SelfLink:"", UID:"57617a63-d31f-472d-91f1-03fac276c695", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cd46bb99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d", Pod:"calico-kube-controllers-5cd46bb99-vsnrh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid58199084d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:07.742018 containerd[1446]: 2025-09-06 00:10:07.698 [INFO][5550] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:10:07.742018 containerd[1446]: 2025-09-06 00:10:07.698 [INFO][5550] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" iface="eth0" netns="" Sep 6 00:10:07.742018 containerd[1446]: 2025-09-06 00:10:07.698 [INFO][5550] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:10:07.742018 containerd[1446]: 2025-09-06 00:10:07.698 [INFO][5550] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:10:07.742018 containerd[1446]: 2025-09-06 00:10:07.725 [INFO][5559] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" HandleID="k8s-pod-network.2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Workload="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:10:07.742018 containerd[1446]: 2025-09-06 00:10:07.725 [INFO][5559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:07.742018 containerd[1446]: 2025-09-06 00:10:07.725 [INFO][5559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:10:07.742018 containerd[1446]: 2025-09-06 00:10:07.735 [WARNING][5559] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" HandleID="k8s-pod-network.2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Workload="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:10:07.742018 containerd[1446]: 2025-09-06 00:10:07.735 [INFO][5559] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" HandleID="k8s-pod-network.2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Workload="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:10:07.742018 containerd[1446]: 2025-09-06 00:10:07.738 [INFO][5559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:07.742018 containerd[1446]: 2025-09-06 00:10:07.740 [INFO][5550] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:10:07.742606 containerd[1446]: time="2025-09-06T00:10:07.742130151Z" level=info msg="TearDown network for sandbox \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\" successfully" Sep 6 00:10:07.742606 containerd[1446]: time="2025-09-06T00:10:07.742157231Z" level=info msg="StopPodSandbox for \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\" returns successfully" Sep 6 00:10:07.743070 containerd[1446]: time="2025-09-06T00:10:07.743043924Z" level=info msg="RemovePodSandbox for \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\"" Sep 6 00:10:07.743122 containerd[1446]: time="2025-09-06T00:10:07.743080564Z" level=info msg="Forcibly stopping sandbox \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\"" Sep 6 00:10:07.823872 containerd[1446]: 2025-09-06 00:10:07.784 [WARNING][5578] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0", GenerateName:"calico-kube-controllers-5cd46bb99-", Namespace:"calico-system", SelfLink:"", UID:"57617a63-d31f-472d-91f1-03fac276c695", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cd46bb99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6402d7c49275be5c6e13406e611368069e008b8c20d4dd90698d9f559b2ec29d", Pod:"calico-kube-controllers-5cd46bb99-vsnrh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid58199084d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:07.823872 containerd[1446]: 2025-09-06 00:10:07.784 [INFO][5578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:10:07.823872 containerd[1446]: 2025-09-06 00:10:07.784 [INFO][5578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" iface="eth0" netns="" Sep 6 00:10:07.823872 containerd[1446]: 2025-09-06 00:10:07.784 [INFO][5578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:10:07.823872 containerd[1446]: 2025-09-06 00:10:07.784 [INFO][5578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:10:07.823872 containerd[1446]: 2025-09-06 00:10:07.807 [INFO][5592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" HandleID="k8s-pod-network.2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Workload="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:10:07.823872 containerd[1446]: 2025-09-06 00:10:07.808 [INFO][5592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:07.823872 containerd[1446]: 2025-09-06 00:10:07.808 [INFO][5592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:10:07.823872 containerd[1446]: 2025-09-06 00:10:07.818 [WARNING][5592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" HandleID="k8s-pod-network.2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Workload="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:10:07.823872 containerd[1446]: 2025-09-06 00:10:07.818 [INFO][5592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" HandleID="k8s-pod-network.2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Workload="localhost-k8s-calico--kube--controllers--5cd46bb99--vsnrh-eth0" Sep 6 00:10:07.823872 containerd[1446]: 2025-09-06 00:10:07.820 [INFO][5592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:07.823872 containerd[1446]: 2025-09-06 00:10:07.822 [INFO][5578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403" Sep 6 00:10:07.824274 containerd[1446]: time="2025-09-06T00:10:07.823927948Z" level=info msg="TearDown network for sandbox \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\" successfully" Sep 6 00:10:07.828861 containerd[1446]: time="2025-09-06T00:10:07.828819817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 6 00:10:07.829041 containerd[1446]: time="2025-09-06T00:10:07.828906098Z" level=info msg="RemovePodSandbox \"2e4e987ed1902f5e779dbed4ca97d2e0b533d6ccfb5887c2dd3acfdaae6ff403\" returns successfully" Sep 6 00:10:07.829423 containerd[1446]: time="2025-09-06T00:10:07.829397145Z" level=info msg="StopPodSandbox for \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\"" Sep 6 00:10:07.899153 sshd[5545]: pam_unix(sshd:session): session closed for user core Sep 6 00:10:07.907657 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:36296.service: Deactivated successfully. Sep 6 00:10:07.911239 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:10:07.914123 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:10:07.917847 containerd[1446]: 2025-09-06 00:10:07.867 [WARNING][5614] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8fe42dcf-c7fe-4f83-b5d8-07f73b02320a", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3", Pod:"coredns-674b8bbfcf-jn4fk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califad2a8044f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:07.917847 containerd[1446]: 2025-09-06 00:10:07.867 [INFO][5614] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:10:07.917847 containerd[1446]: 2025-09-06 00:10:07.867 [INFO][5614] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" iface="eth0" netns="" Sep 6 00:10:07.917847 containerd[1446]: 2025-09-06 00:10:07.867 [INFO][5614] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:10:07.917847 containerd[1446]: 2025-09-06 00:10:07.868 [INFO][5614] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:10:07.917847 containerd[1446]: 2025-09-06 00:10:07.898 [INFO][5623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" HandleID="k8s-pod-network.5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Workload="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:10:07.917847 containerd[1446]: 2025-09-06 00:10:07.898 [INFO][5623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:07.917847 containerd[1446]: 2025-09-06 00:10:07.898 [INFO][5623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:10:07.917847 containerd[1446]: 2025-09-06 00:10:07.908 [WARNING][5623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" HandleID="k8s-pod-network.5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Workload="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:10:07.917847 containerd[1446]: 2025-09-06 00:10:07.908 [INFO][5623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" HandleID="k8s-pod-network.5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Workload="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:10:07.917847 containerd[1446]: 2025-09-06 00:10:07.911 [INFO][5623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:07.917847 containerd[1446]: 2025-09-06 00:10:07.914 [INFO][5614] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:10:07.918202 containerd[1446]: time="2025-09-06T00:10:07.917880996Z" level=info msg="TearDown network for sandbox \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\" successfully" Sep 6 00:10:07.918202 containerd[1446]: time="2025-09-06T00:10:07.917906636Z" level=info msg="StopPodSandbox for \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\" returns successfully" Sep 6 00:10:07.918531 containerd[1446]: time="2025-09-06T00:10:07.918501605Z" level=info msg="RemovePodSandbox for \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\"" Sep 6 00:10:07.918531 containerd[1446]: time="2025-09-06T00:10:07.918533205Z" level=info msg="Forcibly stopping sandbox \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\"" Sep 6 00:10:07.922018 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:36308.service - OpenSSH per-connection server daemon (10.0.0.1:36308). Sep 6 00:10:07.923016 systemd-logind[1425]: Removed session 10. Sep 6 00:10:07.963537 sshd[5633]: Accepted publickey for core from 10.0.0.1 port 36308 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:10:07.965129 sshd[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:10:07.971381 systemd-logind[1425]: New session 11 of user core. Sep 6 00:10:07.976940 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 6 00:10:07.990597 containerd[1446]: 2025-09-06 00:10:07.952 [WARNING][5644] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8fe42dcf-c7fe-4f83-b5d8-07f73b02320a", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a3d7d197dc4b36529fa8c8d21e835118f93cff1aeca67c6a75f42732ff5e6d3", Pod:"coredns-674b8bbfcf-jn4fk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califad2a8044f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:07.990597 containerd[1446]: 2025-09-06 00:10:07.952 [INFO][5644] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:10:07.990597 containerd[1446]: 2025-09-06 00:10:07.952 [INFO][5644] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" iface="eth0" netns="" Sep 6 00:10:07.990597 containerd[1446]: 2025-09-06 00:10:07.952 [INFO][5644] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:10:07.990597 containerd[1446]: 2025-09-06 00:10:07.952 [INFO][5644] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:10:07.990597 containerd[1446]: 2025-09-06 00:10:07.974 [INFO][5654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" HandleID="k8s-pod-network.5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Workload="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:10:07.990597 containerd[1446]: 2025-09-06 00:10:07.975 [INFO][5654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:07.990597 containerd[1446]: 2025-09-06 00:10:07.975 [INFO][5654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:10:07.990597 containerd[1446]: 2025-09-06 00:10:07.984 [WARNING][5654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" HandleID="k8s-pod-network.5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Workload="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:10:07.990597 containerd[1446]: 2025-09-06 00:10:07.984 [INFO][5654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" HandleID="k8s-pod-network.5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Workload="localhost-k8s-coredns--674b8bbfcf--jn4fk-eth0" Sep 6 00:10:07.990597 containerd[1446]: 2025-09-06 00:10:07.985 [INFO][5654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:07.990597 containerd[1446]: 2025-09-06 00:10:07.987 [INFO][5644] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1" Sep 6 00:10:07.991005 containerd[1446]: time="2025-09-06T00:10:07.990630665Z" level=info msg="TearDown network for sandbox \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\" successfully" Sep 6 00:10:07.993808 containerd[1446]: time="2025-09-06T00:10:07.993779109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 6 00:10:07.993905 containerd[1446]: time="2025-09-06T00:10:07.993851670Z" level=info msg="RemovePodSandbox \"5df1e2af56f8f3f8b77f42799635f8dbc5abbff1c3fe5d2656bb3802a613ecd1\" returns successfully" Sep 6 00:10:07.994306 containerd[1446]: time="2025-09-06T00:10:07.994274956Z" level=info msg="StopPodSandbox for \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\"" Sep 6 00:10:08.064508 containerd[1446]: 2025-09-06 00:10:08.027 [WARNING][5672] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--msd89-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"74911617-3e52-41e1-a84e-8da0e31464f5", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2", Pod:"coredns-674b8bbfcf-msd89", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide4cf757f26", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:08.064508 containerd[1446]: 2025-09-06 00:10:08.028 [INFO][5672] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:10:08.064508 containerd[1446]: 2025-09-06 00:10:08.028 [INFO][5672] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" iface="eth0" netns="" Sep 6 00:10:08.064508 containerd[1446]: 2025-09-06 00:10:08.028 [INFO][5672] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:10:08.064508 containerd[1446]: 2025-09-06 00:10:08.028 [INFO][5672] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:10:08.064508 containerd[1446]: 2025-09-06 00:10:08.047 [INFO][5682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" HandleID="k8s-pod-network.42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Workload="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:10:08.064508 containerd[1446]: 2025-09-06 00:10:08.047 [INFO][5682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:08.064508 containerd[1446]: 2025-09-06 00:10:08.047 [INFO][5682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:10:08.064508 containerd[1446]: 2025-09-06 00:10:08.057 [WARNING][5682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" HandleID="k8s-pod-network.42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Workload="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:10:08.064508 containerd[1446]: 2025-09-06 00:10:08.057 [INFO][5682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" HandleID="k8s-pod-network.42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Workload="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:10:08.064508 containerd[1446]: 2025-09-06 00:10:08.059 [INFO][5682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:08.064508 containerd[1446]: 2025-09-06 00:10:08.062 [INFO][5672] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:10:08.064987 containerd[1446]: time="2025-09-06T00:10:08.064537620Z" level=info msg="TearDown network for sandbox \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\" successfully" Sep 6 00:10:08.064987 containerd[1446]: time="2025-09-06T00:10:08.064566740Z" level=info msg="StopPodSandbox for \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\" returns successfully" Sep 6 00:10:08.065069 containerd[1446]: time="2025-09-06T00:10:08.065031586Z" level=info msg="RemovePodSandbox for \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\"" Sep 6 00:10:08.065091 containerd[1446]: time="2025-09-06T00:10:08.065075467Z" level=info msg="Forcibly stopping sandbox \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\"" Sep 6 00:10:08.162591 containerd[1446]: 2025-09-06 00:10:08.103 [WARNING][5705] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--msd89-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"74911617-3e52-41e1-a84e-8da0e31464f5", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4b35c730cf24ded5fc6d739fe4f5df437890bf0d63def458744c7534cc5ade2", Pod:"coredns-674b8bbfcf-msd89", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide4cf757f26", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:08.162591 containerd[1446]: 2025-09-06 00:10:08.103 [INFO][5705] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:10:08.162591 containerd[1446]: 2025-09-06 00:10:08.103 [INFO][5705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" iface="eth0" netns="" Sep 6 00:10:08.162591 containerd[1446]: 2025-09-06 00:10:08.103 [INFO][5705] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:10:08.162591 containerd[1446]: 2025-09-06 00:10:08.103 [INFO][5705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:10:08.162591 containerd[1446]: 2025-09-06 00:10:08.132 [INFO][5713] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" HandleID="k8s-pod-network.42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Workload="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:10:08.162591 containerd[1446]: 2025-09-06 00:10:08.132 [INFO][5713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:08.162591 containerd[1446]: 2025-09-06 00:10:08.132 [INFO][5713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:10:08.162591 containerd[1446]: 2025-09-06 00:10:08.146 [WARNING][5713] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" HandleID="k8s-pod-network.42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Workload="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:10:08.162591 containerd[1446]: 2025-09-06 00:10:08.146 [INFO][5713] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" HandleID="k8s-pod-network.42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Workload="localhost-k8s-coredns--674b8bbfcf--msd89-eth0" Sep 6 00:10:08.162591 containerd[1446]: 2025-09-06 00:10:08.149 [INFO][5713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:08.162591 containerd[1446]: 2025-09-06 00:10:08.152 [INFO][5705] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1" Sep 6 00:10:08.163096 containerd[1446]: time="2025-09-06T00:10:08.162634711Z" level=info msg="TearDown network for sandbox \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\" successfully" Sep 6 00:10:08.196074 containerd[1446]: time="2025-09-06T00:10:08.195894216Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 6 00:10:08.196074 containerd[1446]: time="2025-09-06T00:10:08.196012258Z" level=info msg="RemovePodSandbox \"42130af9e2635c73848b1034534874e2491a521201a7c3a1ef495f19adb9fcc1\" returns successfully" Sep 6 00:10:08.197059 containerd[1446]: time="2025-09-06T00:10:08.196615346Z" level=info msg="StopPodSandbox for \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\"" Sep 6 00:10:08.210689 sshd[5633]: pam_unix(sshd:session): session closed for user core Sep 6 00:10:08.220407 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:36308.service: Deactivated successfully. Sep 6 00:10:08.226063 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:10:08.229181 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:10:08.241669 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:36324.service - OpenSSH per-connection server daemon (10.0.0.1:36324). Sep 6 00:10:08.242906 systemd-logind[1425]: Removed session 11. Sep 6 00:10:08.282592 sshd[5742]: Accepted publickey for core from 10.0.0.1 port 36324 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:10:08.284531 sshd[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:10:08.291205 systemd-logind[1425]: New session 12 of user core. Sep 6 00:10:08.296937 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 6 00:10:08.309385 containerd[1446]: 2025-09-06 00:10:08.269 [WARNING][5731] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0", GenerateName:"calico-apiserver-57978c77c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"28c61605-48bf-420c-9e67-dd95e49a735f", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57978c77c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6", Pod:"calico-apiserver-57978c77c8-87cm2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0067f3b4759", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:08.309385 containerd[1446]: 2025-09-06 00:10:08.270 [INFO][5731] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:10:08.309385 containerd[1446]: 2025-09-06 00:10:08.270 [INFO][5731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" iface="eth0" netns="" Sep 6 00:10:08.309385 containerd[1446]: 2025-09-06 00:10:08.270 [INFO][5731] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:10:08.309385 containerd[1446]: 2025-09-06 00:10:08.270 [INFO][5731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:10:08.309385 containerd[1446]: 2025-09-06 00:10:08.295 [INFO][5745] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" HandleID="k8s-pod-network.4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Workload="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:10:08.309385 containerd[1446]: 2025-09-06 00:10:08.295 [INFO][5745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:08.309385 containerd[1446]: 2025-09-06 00:10:08.295 [INFO][5745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:10:08.309385 containerd[1446]: 2025-09-06 00:10:08.304 [WARNING][5745] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" HandleID="k8s-pod-network.4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Workload="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:10:08.309385 containerd[1446]: 2025-09-06 00:10:08.304 [INFO][5745] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" HandleID="k8s-pod-network.4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Workload="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:10:08.309385 containerd[1446]: 2025-09-06 00:10:08.305 [INFO][5745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:08.309385 containerd[1446]: 2025-09-06 00:10:08.307 [INFO][5731] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:10:08.309887 containerd[1446]: time="2025-09-06T00:10:08.309430844Z" level=info msg="TearDown network for sandbox \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\" successfully" Sep 6 00:10:08.309887 containerd[1446]: time="2025-09-06T00:10:08.309455204Z" level=info msg="StopPodSandbox for \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\" returns successfully" Sep 6 00:10:08.310010 containerd[1446]: time="2025-09-06T00:10:08.309980411Z" level=info msg="RemovePodSandbox for \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\"" Sep 6 00:10:08.310048 containerd[1446]: time="2025-09-06T00:10:08.310016572Z" level=info msg="Forcibly stopping sandbox \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\"" Sep 6 00:10:08.383434 containerd[1446]: 2025-09-06 00:10:08.345 [WARNING][5764] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0", GenerateName:"calico-apiserver-57978c77c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"28c61605-48bf-420c-9e67-dd95e49a735f", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57978c77c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b81e62e58d94c2b3e634908218357c87b3ce1d58a3fa829c3d6709ff6e9f60d6", Pod:"calico-apiserver-57978c77c8-87cm2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0067f3b4759", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:08.383434 containerd[1446]: 2025-09-06 00:10:08.345 [INFO][5764] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:10:08.383434 containerd[1446]: 2025-09-06 00:10:08.345 [INFO][5764] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" iface="eth0" netns="" Sep 6 00:10:08.383434 containerd[1446]: 2025-09-06 00:10:08.345 [INFO][5764] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:10:08.383434 containerd[1446]: 2025-09-06 00:10:08.345 [INFO][5764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:10:08.383434 containerd[1446]: 2025-09-06 00:10:08.365 [INFO][5774] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" HandleID="k8s-pod-network.4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Workload="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:10:08.383434 containerd[1446]: 2025-09-06 00:10:08.366 [INFO][5774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:08.383434 containerd[1446]: 2025-09-06 00:10:08.366 [INFO][5774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:10:08.383434 containerd[1446]: 2025-09-06 00:10:08.378 [WARNING][5774] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" HandleID="k8s-pod-network.4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Workload="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:10:08.383434 containerd[1446]: 2025-09-06 00:10:08.378 [INFO][5774] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" HandleID="k8s-pod-network.4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Workload="localhost-k8s-calico--apiserver--57978c77c8--87cm2-eth0" Sep 6 00:10:08.383434 containerd[1446]: 2025-09-06 00:10:08.379 [INFO][5774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:08.383434 containerd[1446]: 2025-09-06 00:10:08.381 [INFO][5764] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f" Sep 6 00:10:08.383890 containerd[1446]: time="2025-09-06T00:10:08.383481079Z" level=info msg="TearDown network for sandbox \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\" successfully" Sep 6 00:10:08.386612 containerd[1446]: time="2025-09-06T00:10:08.386571122Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 6 00:10:08.386693 containerd[1446]: time="2025-09-06T00:10:08.386653123Z" level=info msg="RemovePodSandbox \"4fe4e9f70839b5b59c15db657dcdafe34cbc03d8cecc6979ae0b8660149c513f\" returns successfully" Sep 6 00:10:08.387186 containerd[1446]: time="2025-09-06T00:10:08.387162851Z" level=info msg="StopPodSandbox for \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\"" Sep 6 00:10:08.483474 containerd[1446]: 2025-09-06 00:10:08.435 [WARNING][5798] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" WorkloadEndpoint="localhost-k8s-whisker--5b585579bb--pgk4g-eth0" Sep 6 00:10:08.483474 containerd[1446]: 2025-09-06 00:10:08.436 [INFO][5798] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:10:08.483474 containerd[1446]: 2025-09-06 00:10:08.436 [INFO][5798] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" iface="eth0" netns="" Sep 6 00:10:08.483474 containerd[1446]: 2025-09-06 00:10:08.436 [INFO][5798] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:10:08.483474 containerd[1446]: 2025-09-06 00:10:08.436 [INFO][5798] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:10:08.483474 containerd[1446]: 2025-09-06 00:10:08.467 [INFO][5806] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" HandleID="k8s-pod-network.6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Workload="localhost-k8s-whisker--5b585579bb--pgk4g-eth0" Sep 6 00:10:08.483474 containerd[1446]: 2025-09-06 00:10:08.467 [INFO][5806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:08.483474 containerd[1446]: 2025-09-06 00:10:08.467 [INFO][5806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:10:08.483474 containerd[1446]: 2025-09-06 00:10:08.476 [WARNING][5806] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" HandleID="k8s-pod-network.6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Workload="localhost-k8s-whisker--5b585579bb--pgk4g-eth0" Sep 6 00:10:08.483474 containerd[1446]: 2025-09-06 00:10:08.476 [INFO][5806] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" HandleID="k8s-pod-network.6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Workload="localhost-k8s-whisker--5b585579bb--pgk4g-eth0" Sep 6 00:10:08.483474 containerd[1446]: 2025-09-06 00:10:08.478 [INFO][5806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:08.483474 containerd[1446]: 2025-09-06 00:10:08.481 [INFO][5798] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:10:08.483474 containerd[1446]: time="2025-09-06T00:10:08.483452317Z" level=info msg="TearDown network for sandbox \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\" successfully" Sep 6 00:10:08.485480 containerd[1446]: time="2025-09-06T00:10:08.483476837Z" level=info msg="StopPodSandbox for \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\" returns successfully" Sep 6 00:10:08.485480 containerd[1446]: time="2025-09-06T00:10:08.484052885Z" level=info msg="RemovePodSandbox for \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\"" Sep 6 00:10:08.485480 containerd[1446]: time="2025-09-06T00:10:08.484100206Z" level=info msg="Forcibly stopping sandbox \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\"" Sep 6 00:10:08.484707 sshd[5742]: pam_unix(sshd:session): session closed for user core Sep 6 00:10:08.489257 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:36324.service: Deactivated successfully. Sep 6 00:10:08.491391 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:10:08.492208 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:10:08.493141 systemd-logind[1425]: Removed session 12. 
Sep 6 00:10:08.557517 containerd[1446]: 2025-09-06 00:10:08.525 [WARNING][5827] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" WorkloadEndpoint="localhost-k8s-whisker--5b585579bb--pgk4g-eth0" Sep 6 00:10:08.557517 containerd[1446]: 2025-09-06 00:10:08.525 [INFO][5827] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:10:08.557517 containerd[1446]: 2025-09-06 00:10:08.525 [INFO][5827] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" iface="eth0" netns="" Sep 6 00:10:08.557517 containerd[1446]: 2025-09-06 00:10:08.525 [INFO][5827] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:10:08.557517 containerd[1446]: 2025-09-06 00:10:08.525 [INFO][5827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:10:08.557517 containerd[1446]: 2025-09-06 00:10:08.543 [INFO][5836] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" HandleID="k8s-pod-network.6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Workload="localhost-k8s-whisker--5b585579bb--pgk4g-eth0" Sep 6 00:10:08.557517 containerd[1446]: 2025-09-06 00:10:08.543 [INFO][5836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:08.557517 containerd[1446]: 2025-09-06 00:10:08.543 [INFO][5836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:10:08.557517 containerd[1446]: 2025-09-06 00:10:08.552 [WARNING][5836] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" HandleID="k8s-pod-network.6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Workload="localhost-k8s-whisker--5b585579bb--pgk4g-eth0" Sep 6 00:10:08.557517 containerd[1446]: 2025-09-06 00:10:08.552 [INFO][5836] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" HandleID="k8s-pod-network.6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Workload="localhost-k8s-whisker--5b585579bb--pgk4g-eth0" Sep 6 00:10:08.557517 containerd[1446]: 2025-09-06 00:10:08.554 [INFO][5836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:08.557517 containerd[1446]: 2025-09-06 00:10:08.555 [INFO][5827] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878" Sep 6 00:10:08.557919 containerd[1446]: time="2025-09-06T00:10:08.557560993Z" level=info msg="TearDown network for sandbox \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\" successfully" Sep 6 00:10:08.563605 containerd[1446]: time="2025-09-06T00:10:08.563561317Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 6 00:10:08.563880 containerd[1446]: time="2025-09-06T00:10:08.563641518Z" level=info msg="RemovePodSandbox \"6a42d498079f9f2b70774c7806f5f5297350a9b04a0edf73168ae800d8712878\" returns successfully" Sep 6 00:10:08.564535 containerd[1446]: time="2025-09-06T00:10:08.564473650Z" level=info msg="StopPodSandbox for \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\"" Sep 6 00:10:08.635468 containerd[1446]: 2025-09-06 00:10:08.599 [WARNING][5854] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0", GenerateName:"calico-apiserver-57978c77c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"310f0b3e-872d-47ec-bab7-107416283ff9", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57978c77c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a", Pod:"calico-apiserver-57978c77c8-2bgst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaae0e8814f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:08.635468 containerd[1446]: 2025-09-06 00:10:08.600 [INFO][5854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:10:08.635468 containerd[1446]: 2025-09-06 00:10:08.600 [INFO][5854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" iface="eth0" netns="" Sep 6 00:10:08.635468 containerd[1446]: 2025-09-06 00:10:08.600 [INFO][5854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:10:08.635468 containerd[1446]: 2025-09-06 00:10:08.600 [INFO][5854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:10:08.635468 containerd[1446]: 2025-09-06 00:10:08.619 [INFO][5862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" HandleID="k8s-pod-network.5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Workload="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:10:08.635468 containerd[1446]: 2025-09-06 00:10:08.619 [INFO][5862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:08.635468 containerd[1446]: 2025-09-06 00:10:08.619 [INFO][5862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:10:08.635468 containerd[1446]: 2025-09-06 00:10:08.630 [WARNING][5862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" HandleID="k8s-pod-network.5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Workload="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:10:08.635468 containerd[1446]: 2025-09-06 00:10:08.630 [INFO][5862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" HandleID="k8s-pod-network.5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Workload="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:10:08.635468 containerd[1446]: 2025-09-06 00:10:08.631 [INFO][5862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:08.635468 containerd[1446]: 2025-09-06 00:10:08.633 [INFO][5854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:10:08.635468 containerd[1446]: time="2025-09-06T00:10:08.635460122Z" level=info msg="TearDown network for sandbox \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\" successfully" Sep 6 00:10:08.635995 containerd[1446]: time="2025-09-06T00:10:08.635484603Z" level=info msg="StopPodSandbox for \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\" returns successfully" Sep 6 00:10:08.636077 containerd[1446]: time="2025-09-06T00:10:08.636035090Z" level=info msg="RemovePodSandbox for \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\"" Sep 6 00:10:08.636116 containerd[1446]: time="2025-09-06T00:10:08.636083811Z" level=info msg="Forcibly stopping sandbox \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\"" Sep 6 00:10:08.705387 containerd[1446]: 2025-09-06 00:10:08.671 [WARNING][5879] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0", GenerateName:"calico-apiserver-57978c77c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"310f0b3e-872d-47ec-bab7-107416283ff9", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57978c77c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54cc13e31a9aa04d7a6bf7aa9fc6e59e50e7dac6caeafa365a488540b510b49a", Pod:"calico-apiserver-57978c77c8-2bgst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaae0e8814f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:08.705387 containerd[1446]: 2025-09-06 00:10:08.671 [INFO][5879] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:10:08.705387 containerd[1446]: 2025-09-06 00:10:08.671 [INFO][5879] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" iface="eth0" netns="" Sep 6 00:10:08.705387 containerd[1446]: 2025-09-06 00:10:08.672 [INFO][5879] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:10:08.705387 containerd[1446]: 2025-09-06 00:10:08.672 [INFO][5879] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:10:08.705387 containerd[1446]: 2025-09-06 00:10:08.691 [INFO][5887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" HandleID="k8s-pod-network.5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Workload="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:10:08.705387 containerd[1446]: 2025-09-06 00:10:08.691 [INFO][5887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:08.705387 containerd[1446]: 2025-09-06 00:10:08.691 [INFO][5887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:10:08.705387 containerd[1446]: 2025-09-06 00:10:08.700 [WARNING][5887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" HandleID="k8s-pod-network.5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Workload="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:10:08.705387 containerd[1446]: 2025-09-06 00:10:08.700 [INFO][5887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" HandleID="k8s-pod-network.5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Workload="localhost-k8s-calico--apiserver--57978c77c8--2bgst-eth0" Sep 6 00:10:08.705387 containerd[1446]: 2025-09-06 00:10:08.701 [INFO][5887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:08.705387 containerd[1446]: 2025-09-06 00:10:08.703 [INFO][5879] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2" Sep 6 00:10:08.705387 containerd[1446]: time="2025-09-06T00:10:08.705342379Z" level=info msg="TearDown network for sandbox \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\" successfully" Sep 6 00:10:08.708147 containerd[1446]: time="2025-09-06T00:10:08.708118858Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 6 00:10:08.708282 containerd[1446]: time="2025-09-06T00:10:08.708180059Z" level=info msg="RemovePodSandbox \"5a9b8b21cdd26d9863f7f1a30afb75e25d7e06c904ce8058a90bfed09baa6fc2\" returns successfully" Sep 6 00:10:08.708684 containerd[1446]: time="2025-09-06T00:10:08.708631065Z" level=info msg="StopPodSandbox for \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\"" Sep 6 00:10:08.782760 containerd[1446]: 2025-09-06 00:10:08.742 [WARNING][5905] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--cdhv4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b270af4a-cf4b-43ff-ae1a-ec07098c6468", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9", Pod:"goldmane-54d579b49d-cdhv4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie7d6167f246", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:08.782760 containerd[1446]: 2025-09-06 00:10:08.742 [INFO][5905] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:10:08.782760 containerd[1446]: 2025-09-06 00:10:08.742 [INFO][5905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" iface="eth0" netns="" Sep 6 00:10:08.782760 containerd[1446]: 2025-09-06 00:10:08.742 [INFO][5905] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:10:08.782760 containerd[1446]: 2025-09-06 00:10:08.742 [INFO][5905] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:10:08.782760 containerd[1446]: 2025-09-06 00:10:08.766 [INFO][5913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" HandleID="k8s-pod-network.a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Workload="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:10:08.782760 containerd[1446]: 2025-09-06 00:10:08.767 [INFO][5913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:08.782760 containerd[1446]: 2025-09-06 00:10:08.767 [INFO][5913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:10:08.782760 containerd[1446]: 2025-09-06 00:10:08.777 [WARNING][5913] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" HandleID="k8s-pod-network.a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Workload="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:10:08.782760 containerd[1446]: 2025-09-06 00:10:08.777 [INFO][5913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" HandleID="k8s-pod-network.a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Workload="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:10:08.782760 containerd[1446]: 2025-09-06 00:10:08.778 [INFO][5913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:08.782760 containerd[1446]: 2025-09-06 00:10:08.780 [INFO][5905] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:10:08.782760 containerd[1446]: time="2025-09-06T00:10:08.782554339Z" level=info msg="TearDown network for sandbox \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\" successfully" Sep 6 00:10:08.782760 containerd[1446]: time="2025-09-06T00:10:08.782581859Z" level=info msg="StopPodSandbox for \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\" returns successfully" Sep 6 00:10:08.784787 containerd[1446]: time="2025-09-06T00:10:08.784757570Z" level=info msg="RemovePodSandbox for \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\"" Sep 6 00:10:08.784871 containerd[1446]: time="2025-09-06T00:10:08.784794810Z" level=info msg="Forcibly stopping sandbox \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\"" Sep 6 00:10:08.889322 containerd[1446]: 2025-09-06 00:10:08.830 [WARNING][5931] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--cdhv4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b270af4a-cf4b-43ff-ae1a-ec07098c6468", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76b739ed418f94f0abeb1395785dc38d7960fde07d264fc7cc01675c3cb293e9", Pod:"goldmane-54d579b49d-cdhv4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie7d6167f246", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:10:08.889322 containerd[1446]: 2025-09-06 00:10:08.830 [INFO][5931] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:10:08.889322 containerd[1446]: 2025-09-06 00:10:08.830 [INFO][5931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" iface="eth0" netns="" Sep 6 00:10:08.889322 containerd[1446]: 2025-09-06 00:10:08.830 [INFO][5931] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:10:08.889322 containerd[1446]: 2025-09-06 00:10:08.830 [INFO][5931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:10:08.889322 containerd[1446]: 2025-09-06 00:10:08.875 [INFO][5940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" HandleID="k8s-pod-network.a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Workload="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:10:08.889322 containerd[1446]: 2025-09-06 00:10:08.875 [INFO][5940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:10:08.889322 containerd[1446]: 2025-09-06 00:10:08.875 [INFO][5940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:10:08.889322 containerd[1446]: 2025-09-06 00:10:08.884 [WARNING][5940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" HandleID="k8s-pod-network.a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Workload="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:10:08.889322 containerd[1446]: 2025-09-06 00:10:08.884 [INFO][5940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" HandleID="k8s-pod-network.a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Workload="localhost-k8s-goldmane--54d579b49d--cdhv4-eth0" Sep 6 00:10:08.889322 containerd[1446]: 2025-09-06 00:10:08.885 [INFO][5940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:10:08.889322 containerd[1446]: 2025-09-06 00:10:08.887 [INFO][5931] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2" Sep 6 00:10:08.890063 containerd[1446]: time="2025-09-06T00:10:08.889303992Z" level=info msg="TearDown network for sandbox \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\" successfully" Sep 6 00:10:08.893045 containerd[1446]: time="2025-09-06T00:10:08.892925842Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 6 00:10:08.893045 containerd[1446]: time="2025-09-06T00:10:08.893001963Z" level=info msg="RemovePodSandbox \"a4dd79927dff2ddae6915cf42975751854df80461c3901c342147f05beb960c2\" returns successfully" Sep 6 00:10:13.503388 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:60542.service - OpenSSH per-connection server daemon (10.0.0.1:60542). Sep 6 00:10:13.551896 sshd[5955]: Accepted publickey for core from 10.0.0.1 port 60542 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:10:13.553278 sshd[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:10:13.557894 systemd-logind[1425]: New session 13 of user core. Sep 6 00:10:13.566965 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 6 00:10:13.728634 sshd[5955]: pam_unix(sshd:session): session closed for user core Sep 6 00:10:13.733299 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:60542.service: Deactivated successfully. Sep 6 00:10:13.736299 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:10:13.736988 systemd-logind[1425]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:10:13.738184 systemd-logind[1425]: Removed session 13. Sep 6 00:10:18.738638 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:60552.service - OpenSSH per-connection server daemon (10.0.0.1:60552). Sep 6 00:10:18.777433 sshd[5972]: Accepted publickey for core from 10.0.0.1 port 60552 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:10:18.778885 sshd[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:10:18.783337 systemd-logind[1425]: New session 14 of user core. Sep 6 00:10:18.792937 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 6 00:10:18.974289 sshd[5972]: pam_unix(sshd:session): session closed for user core Sep 6 00:10:18.978787 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:60552.service: Deactivated successfully. Sep 6 00:10:18.980558 systemd[1]: session-14.scope: Deactivated successfully. 
Sep 6 00:10:18.981129 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit.
Sep 6 00:10:18.982117 systemd-logind[1425]: Removed session 14.
Sep 6 00:10:20.492404 kubelet[2465]: E0906 00:10:20.492305 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:10:23.985858 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:48040.service - OpenSSH per-connection server daemon (10.0.0.1:48040).
Sep 6 00:10:24.041887 sshd[5994]: Accepted publickey for core from 10.0.0.1 port 48040 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:10:24.044304 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:10:24.049493 systemd-logind[1425]: New session 15 of user core.
Sep 6 00:10:24.055980 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 6 00:10:24.586988 sshd[5994]: pam_unix(sshd:session): session closed for user core
Sep 6 00:10:24.591098 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:48040.service: Deactivated successfully.
Sep 6 00:10:24.594487 systemd[1]: session-15.scope: Deactivated successfully.
Sep 6 00:10:24.595409 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit.
Sep 6 00:10:24.596338 systemd-logind[1425]: Removed session 15.
Sep 6 00:10:28.490181 kubelet[2465]: E0906 00:10:28.490092 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:10:29.598681 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:48048.service - OpenSSH per-connection server daemon (10.0.0.1:48048).
Sep 6 00:10:29.640869 sshd[6032]: Accepted publickey for core from 10.0.0.1 port 48048 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:10:29.642143 sshd[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:10:29.647991 systemd-logind[1425]: New session 16 of user core.
Sep 6 00:10:29.657914 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 6 00:10:29.866805 sshd[6032]: pam_unix(sshd:session): session closed for user core
Sep 6 00:10:29.876992 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:48048.service: Deactivated successfully.
Sep 6 00:10:29.879164 systemd[1]: session-16.scope: Deactivated successfully.
Sep 6 00:10:29.880517 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit.
Sep 6 00:10:29.888010 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:48064.service - OpenSSH per-connection server daemon (10.0.0.1:48064).
Sep 6 00:10:29.889410 systemd-logind[1425]: Removed session 16.
Sep 6 00:10:29.922509 sshd[6046]: Accepted publickey for core from 10.0.0.1 port 48064 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:10:29.926330 sshd[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:10:29.930620 systemd-logind[1425]: New session 17 of user core.
Sep 6 00:10:29.942927 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 6 00:10:30.183137 sshd[6046]: pam_unix(sshd:session): session closed for user core
Sep 6 00:10:30.195450 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:48064.service: Deactivated successfully.
Sep 6 00:10:30.197908 systemd[1]: session-17.scope: Deactivated successfully.
Sep 6 00:10:30.200278 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit.
Sep 6 00:10:30.205053 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:48200.service - OpenSSH per-connection server daemon (10.0.0.1:48200).
Sep 6 00:10:30.206415 systemd-logind[1425]: Removed session 17.
Sep 6 00:10:30.241608 sshd[6058]: Accepted publickey for core from 10.0.0.1 port 48200 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:10:30.243012 sshd[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:10:30.248805 systemd-logind[1425]: New session 18 of user core.
Sep 6 00:10:30.260984 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 6 00:10:30.948116 sshd[6058]: pam_unix(sshd:session): session closed for user core
Sep 6 00:10:30.956063 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:48200.service: Deactivated successfully.
Sep 6 00:10:30.959338 systemd[1]: session-18.scope: Deactivated successfully.
Sep 6 00:10:30.960973 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit.
Sep 6 00:10:30.971666 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:48210.service - OpenSSH per-connection server daemon (10.0.0.1:48210).
Sep 6 00:10:30.973664 systemd-logind[1425]: Removed session 18.
Sep 6 00:10:31.006259 sshd[6077]: Accepted publickey for core from 10.0.0.1 port 48210 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:10:31.007521 sshd[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:10:31.012244 systemd-logind[1425]: New session 19 of user core.
Sep 6 00:10:31.019884 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 6 00:10:31.480129 sshd[6077]: pam_unix(sshd:session): session closed for user core
Sep 6 00:10:31.489498 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:48210.service: Deactivated successfully.
Sep 6 00:10:31.490203 kubelet[2465]: E0906 00:10:31.490175 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:10:31.492230 systemd[1]: session-19.scope: Deactivated successfully.
Sep 6 00:10:31.494291 systemd-logind[1425]: Session 19 logged out. Waiting for processes to exit.
Sep 6 00:10:31.503173 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:48216.service - OpenSSH per-connection server daemon (10.0.0.1:48216).
Sep 6 00:10:31.504426 systemd-logind[1425]: Removed session 19.
Sep 6 00:10:31.533591 sshd[6089]: Accepted publickey for core from 10.0.0.1 port 48216 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:10:31.535130 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:10:31.539793 systemd-logind[1425]: New session 20 of user core.
Sep 6 00:10:31.550932 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 6 00:10:31.663919 sshd[6089]: pam_unix(sshd:session): session closed for user core
Sep 6 00:10:31.667056 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:48216.service: Deactivated successfully.
Sep 6 00:10:31.668955 systemd[1]: session-20.scope: Deactivated successfully.
Sep 6 00:10:31.669579 systemd-logind[1425]: Session 20 logged out. Waiting for processes to exit.
Sep 6 00:10:31.670726 systemd-logind[1425]: Removed session 20.
Sep 6 00:10:33.949291 systemd[1]: run-containerd-runc-k8s.io-3d4ddd8842e92ee38d3cde37baa8a9bb76bd662920f4dea4341ab3a76ccc5944-runc.L0YN8f.mount: Deactivated successfully.
Sep 6 00:10:34.490990 kubelet[2465]: E0906 00:10:34.490940 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:10:36.679628 systemd[1]: Started sshd@20-10.0.0.115:22-10.0.0.1:48232.service - OpenSSH per-connection server daemon (10.0.0.1:48232).
Sep 6 00:10:36.719272 sshd[6167]: Accepted publickey for core from 10.0.0.1 port 48232 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:10:36.720727 sshd[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:10:36.726395 systemd-logind[1425]: New session 21 of user core.
Sep 6 00:10:36.737546 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 6 00:10:36.885150 sshd[6167]: pam_unix(sshd:session): session closed for user core
Sep 6 00:10:36.888845 systemd[1]: sshd@20-10.0.0.115:22-10.0.0.1:48232.service: Deactivated successfully.
Sep 6 00:10:36.890727 systemd[1]: session-21.scope: Deactivated successfully.
Sep 6 00:10:36.892007 systemd-logind[1425]: Session 21 logged out. Waiting for processes to exit.
Sep 6 00:10:36.893598 systemd-logind[1425]: Removed session 21.
Sep 6 00:10:41.913806 systemd[1]: Started sshd@21-10.0.0.115:22-10.0.0.1:33578.service - OpenSSH per-connection server daemon (10.0.0.1:33578).
Sep 6 00:10:41.962829 sshd[6181]: Accepted publickey for core from 10.0.0.1 port 33578 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:10:41.965412 sshd[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:10:41.970509 systemd-logind[1425]: New session 22 of user core.
Sep 6 00:10:41.981958 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 6 00:10:42.129360 sshd[6181]: pam_unix(sshd:session): session closed for user core
Sep 6 00:10:42.134079 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:33578.service: Deactivated successfully.
Sep 6 00:10:42.136905 systemd[1]: session-22.scope: Deactivated successfully.
Sep 6 00:10:42.139358 systemd-logind[1425]: Session 22 logged out. Waiting for processes to exit.
Sep 6 00:10:42.140423 systemd-logind[1425]: Removed session 22.