Sep 5 23:55:05.859211 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 5 23:55:05.859281 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025
Sep 5 23:55:05.859292 kernel: KASLR enabled
Sep 5 23:55:05.859298 kernel: efi: EFI v2.7 by EDK II
Sep 5 23:55:05.859303 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 5 23:55:05.859309 kernel: random: crng init done
Sep 5 23:55:05.859316 kernel: ACPI: Early table checksum verification disabled
Sep 5 23:55:05.859322 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 5 23:55:05.859328 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 5 23:55:05.859336 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:55:05.859342 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:55:05.859348 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:55:05.859354 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:55:05.859360 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:55:05.859368 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:55:05.859376 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:55:05.859382 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:55:05.859389 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:55:05.859395 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 5 23:55:05.859401 kernel: NUMA: Failed to initialise from firmware
Sep 5 23:55:05.859408 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 23:55:05.859414 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 5 23:55:05.859420 kernel: Zone ranges:
Sep 5 23:55:05.859427 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 23:55:05.859433 kernel: DMA32 empty
Sep 5 23:55:05.859441 kernel: Normal empty
Sep 5 23:55:05.859447 kernel: Movable zone start for each node
Sep 5 23:55:05.859453 kernel: Early memory node ranges
Sep 5 23:55:05.859460 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 5 23:55:05.859466 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 5 23:55:05.859472 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 5 23:55:05.859478 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 5 23:55:05.859484 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 5 23:55:05.859491 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 5 23:55:05.859497 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 5 23:55:05.859503 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 23:55:05.859509 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 5 23:55:05.859517 kernel: psci: probing for conduit method from ACPI.
Sep 5 23:55:05.859523 kernel: psci: PSCIv1.1 detected in firmware.
Sep 5 23:55:05.859529 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 23:55:05.859538 kernel: psci: Trusted OS migration not required
Sep 5 23:55:05.859545 kernel: psci: SMC Calling Convention v1.1
Sep 5 23:55:05.859552 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 5 23:55:05.859560 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 5 23:55:05.859567 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 5 23:55:05.859573 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 5 23:55:05.859580 kernel: Detected PIPT I-cache on CPU0
Sep 5 23:55:05.859587 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 23:55:05.859594 kernel: CPU features: detected: Hardware dirty bit management
Sep 5 23:55:05.859601 kernel: CPU features: detected: Spectre-v4
Sep 5 23:55:05.859607 kernel: CPU features: detected: Spectre-BHB
Sep 5 23:55:05.859614 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 5 23:55:05.859621 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 5 23:55:05.859629 kernel: CPU features: detected: ARM erratum 1418040
Sep 5 23:55:05.859635 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 5 23:55:05.859642 kernel: alternatives: applying boot alternatives
Sep 5 23:55:05.859650 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:55:05.859657 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 23:55:05.859672 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 23:55:05.859680 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 23:55:05.859687 kernel: Fallback order for Node 0: 0
Sep 5 23:55:05.859693 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 5 23:55:05.859700 kernel: Policy zone: DMA
Sep 5 23:55:05.859708 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 23:55:05.859716 kernel: software IO TLB: area num 4.
Sep 5 23:55:05.859723 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 5 23:55:05.859731 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Sep 5 23:55:05.859738 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 23:55:05.859744 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 23:55:05.859752 kernel: rcu: RCU event tracing is enabled.
Sep 5 23:55:05.859759 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 23:55:05.859766 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 23:55:05.859773 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 23:55:05.859780 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 23:55:05.859786 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 23:55:05.859795 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 5 23:55:05.859801 kernel: GICv3: 256 SPIs implemented
Sep 5 23:55:05.859808 kernel: GICv3: 0 Extended SPIs implemented
Sep 5 23:55:05.859815 kernel: Root IRQ handler: gic_handle_irq
Sep 5 23:55:05.859821 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 5 23:55:05.859828 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 5 23:55:05.859841 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 5 23:55:05.859848 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 5 23:55:05.859855 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 5 23:55:05.859861 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 5 23:55:05.859868 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 5 23:55:05.859875 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 23:55:05.859884 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:55:05.859891 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 5 23:55:05.859898 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 5 23:55:05.859904 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 5 23:55:05.859911 kernel: arm-pv: using stolen time PV
Sep 5 23:55:05.859918 kernel: Console: colour dummy device 80x25
Sep 5 23:55:05.859925 kernel: ACPI: Core revision 20230628
Sep 5 23:55:05.859932 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 5 23:55:05.859964 kernel: pid_max: default: 32768 minimum: 301
Sep 5 23:55:05.859977 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 23:55:05.859985 kernel: landlock: Up and running.
Sep 5 23:55:05.859991 kernel: SELinux: Initializing.
Sep 5 23:55:05.859998 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:55:05.860005 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:55:05.860012 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 23:55:05.860019 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 23:55:05.860026 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 23:55:05.860033 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 23:55:05.860040 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 5 23:55:05.860048 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 5 23:55:05.860055 kernel: Remapping and enabling EFI services.
Sep 5 23:55:05.860062 kernel: smp: Bringing up secondary CPUs ...
Sep 5 23:55:05.860068 kernel: Detected PIPT I-cache on CPU1
Sep 5 23:55:05.860075 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 5 23:55:05.860082 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 5 23:55:05.860089 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:55:05.860096 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 5 23:55:05.860102 kernel: Detected PIPT I-cache on CPU2
Sep 5 23:55:05.860109 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 5 23:55:05.860118 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 5 23:55:05.860125 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:55:05.860136 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 5 23:55:05.860145 kernel: Detected PIPT I-cache on CPU3
Sep 5 23:55:05.860157 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 5 23:55:05.860165 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 5 23:55:05.860172 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:55:05.860179 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 5 23:55:05.860186 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 23:55:05.860194 kernel: SMP: Total of 4 processors activated.
Sep 5 23:55:05.860202 kernel: CPU features: detected: 32-bit EL0 Support
Sep 5 23:55:05.860209 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 5 23:55:05.860223 kernel: CPU features: detected: Common not Private translations
Sep 5 23:55:05.860230 kernel: CPU features: detected: CRC32 instructions
Sep 5 23:55:05.860237 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 5 23:55:05.860245 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 5 23:55:05.860252 kernel: CPU features: detected: LSE atomic instructions
Sep 5 23:55:05.860261 kernel: CPU features: detected: Privileged Access Never
Sep 5 23:55:05.860269 kernel: CPU features: detected: RAS Extension Support
Sep 5 23:55:05.860276 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 5 23:55:05.860283 kernel: CPU: All CPU(s) started at EL1
Sep 5 23:55:05.860290 kernel: alternatives: applying system-wide alternatives
Sep 5 23:55:05.860297 kernel: devtmpfs: initialized
Sep 5 23:55:05.860305 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 23:55:05.860313 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 23:55:05.860320 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 23:55:05.860329 kernel: SMBIOS 3.0.0 present.
Sep 5 23:55:05.860336 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 5 23:55:05.860343 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 23:55:05.860351 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 5 23:55:05.860358 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 5 23:55:05.860365 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 5 23:55:05.860372 kernel: audit: initializing netlink subsys (disabled)
Sep 5 23:55:05.860380 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Sep 5 23:55:05.860387 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 23:55:05.860395 kernel: cpuidle: using governor menu
Sep 5 23:55:05.860403 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 5 23:55:05.860410 kernel: ASID allocator initialised with 32768 entries
Sep 5 23:55:05.860417 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 23:55:05.860424 kernel: Serial: AMBA PL011 UART driver
Sep 5 23:55:05.860432 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 5 23:55:05.860439 kernel: Modules: 0 pages in range for non-PLT usage
Sep 5 23:55:05.860446 kernel: Modules: 509008 pages in range for PLT usage
Sep 5 23:55:05.860453 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 23:55:05.860462 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 23:55:05.860469 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 5 23:55:05.860476 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 5 23:55:05.860483 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 23:55:05.860491 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 23:55:05.860498 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 5 23:55:05.860505 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 5 23:55:05.860512 kernel: ACPI: Added _OSI(Module Device)
Sep 5 23:55:05.860519 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 23:55:05.860528 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 23:55:05.860535 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 23:55:05.860542 kernel: ACPI: Interpreter enabled
Sep 5 23:55:05.860549 kernel: ACPI: Using GIC for interrupt routing
Sep 5 23:55:05.860557 kernel: ACPI: MCFG table detected, 1 entries
Sep 5 23:55:05.860573 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 5 23:55:05.860580 kernel: printk: console [ttyAMA0] enabled
Sep 5 23:55:05.860587 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 23:55:05.860752 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 23:55:05.860847 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 5 23:55:05.860914 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 5 23:55:05.860986 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 5 23:55:05.861051 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 5 23:55:05.861060 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 5 23:55:05.861068 kernel: PCI host bridge to bus 0000:00
Sep 5 23:55:05.861139 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 5 23:55:05.861202 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 5 23:55:05.861275 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 5 23:55:05.861334 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 23:55:05.861420 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 5 23:55:05.861506 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 5 23:55:05.861598 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 5 23:55:05.861677 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 5 23:55:05.861746 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 23:55:05.861814 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 23:55:05.861880 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 5 23:55:05.861948 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 5 23:55:05.862010 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 5 23:55:05.862070 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 5 23:55:05.862132 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 5 23:55:05.862142 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 5 23:55:05.862169 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 5 23:55:05.862186 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 5 23:55:05.862194 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 5 23:55:05.862201 kernel: iommu: Default domain type: Translated
Sep 5 23:55:05.862208 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 5 23:55:05.862233 kernel: efivars: Registered efivars operations
Sep 5 23:55:05.862243 kernel: vgaarb: loaded
Sep 5 23:55:05.862250 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 5 23:55:05.862257 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 23:55:05.862265 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 23:55:05.862272 kernel: pnp: PnP ACPI init
Sep 5 23:55:05.862350 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 5 23:55:05.862362 kernel: pnp: PnP ACPI: found 1 devices
Sep 5 23:55:05.862369 kernel: NET: Registered PF_INET protocol family
Sep 5 23:55:05.862377 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 23:55:05.862387 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 23:55:05.862395 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 23:55:05.862402 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 23:55:05.862410 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 23:55:05.862417 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 23:55:05.862425 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:55:05.862432 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:55:05.862440 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 23:55:05.862448 kernel: PCI: CLS 0 bytes, default 64
Sep 5 23:55:05.862455 kernel: kvm [1]: HYP mode not available
Sep 5 23:55:05.862463 kernel: Initialise system trusted keyrings
Sep 5 23:55:05.862470 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 23:55:05.862477 kernel: Key type asymmetric registered
Sep 5 23:55:05.862484 kernel: Asymmetric key parser 'x509' registered
Sep 5 23:55:05.862491 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 23:55:05.862499 kernel: io scheduler mq-deadline registered
Sep 5 23:55:05.862506 kernel: io scheduler kyber registered
Sep 5 23:55:05.862515 kernel: io scheduler bfq registered
Sep 5 23:55:05.862524 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 5 23:55:05.862531 kernel: ACPI: button: Power Button [PWRB]
Sep 5 23:55:05.862539 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 5 23:55:05.862608 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 5 23:55:05.862618 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 23:55:05.862626 kernel: thunder_xcv, ver 1.0
Sep 5 23:55:05.862633 kernel: thunder_bgx, ver 1.0
Sep 5 23:55:05.862640 kernel: nicpf, ver 1.0
Sep 5 23:55:05.862647 kernel: nicvf, ver 1.0
Sep 5 23:55:05.862735 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 5 23:55:05.862800 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:55:05 UTC (1757116505)
Sep 5 23:55:05.862810 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 5 23:55:05.862817 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 5 23:55:05.862825 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 5 23:55:05.862832 kernel: watchdog: Hard watchdog permanently disabled
Sep 5 23:55:05.862840 kernel: NET: Registered PF_INET6 protocol family
Sep 5 23:55:05.862847 kernel: Segment Routing with IPv6
Sep 5 23:55:05.862857 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 23:55:05.862864 kernel: NET: Registered PF_PACKET protocol family
Sep 5 23:55:05.862872 kernel: Key type dns_resolver registered
Sep 5 23:55:05.862879 kernel: registered taskstats version 1
Sep 5 23:55:05.862886 kernel: Loading compiled-in X.509 certificates
Sep 5 23:55:05.862894 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20'
Sep 5 23:55:05.862901 kernel: Key type .fscrypt registered
Sep 5 23:55:05.862909 kernel: Key type fscrypt-provisioning registered
Sep 5 23:55:05.862916 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 23:55:05.862924 kernel: ima: Allocated hash algorithm: sha1
Sep 5 23:55:05.862932 kernel: ima: No architecture policies found
Sep 5 23:55:05.862939 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 5 23:55:05.862946 kernel: clk: Disabling unused clocks
Sep 5 23:55:05.862953 kernel: Freeing unused kernel memory: 39424K
Sep 5 23:55:05.862961 kernel: Run /init as init process
Sep 5 23:55:05.862968 kernel: with arguments:
Sep 5 23:55:05.862975 kernel: /init
Sep 5 23:55:05.862982 kernel: with environment:
Sep 5 23:55:05.862991 kernel: HOME=/
Sep 5 23:55:05.862998 kernel: TERM=linux
Sep 5 23:55:05.863005 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 23:55:05.863015 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 23:55:05.863025 systemd[1]: Detected virtualization kvm.
Sep 5 23:55:05.863033 systemd[1]: Detected architecture arm64.
Sep 5 23:55:05.863041 systemd[1]: Running in initrd.
Sep 5 23:55:05.863050 systemd[1]: No hostname configured, using default hostname.
Sep 5 23:55:05.863057 systemd[1]: Hostname set to .
Sep 5 23:55:05.863065 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 23:55:05.863073 systemd[1]: Queued start job for default target initrd.target.
Sep 5 23:55:05.863082 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:55:05.863090 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:55:05.863098 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 23:55:05.863107 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 23:55:05.863116 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 23:55:05.863125 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 23:55:05.863134 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 23:55:05.863142 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 23:55:05.863150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:55:05.863159 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:55:05.863167 systemd[1]: Reached target paths.target - Path Units.
Sep 5 23:55:05.863176 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 23:55:05.863184 systemd[1]: Reached target swap.target - Swaps.
Sep 5 23:55:05.863191 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 23:55:05.863199 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 23:55:05.863207 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 23:55:05.863223 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 23:55:05.863232 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 23:55:05.863240 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:55:05.863248 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:55:05.863258 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:55:05.863266 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 23:55:05.863274 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 23:55:05.863282 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 23:55:05.863290 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 23:55:05.863298 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 23:55:05.863306 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 23:55:05.863314 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 23:55:05.863323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:55:05.863332 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 23:55:05.863340 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:55:05.863347 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 23:55:05.863377 systemd-journald[237]: Collecting audit messages is disabled.
Sep 5 23:55:05.863399 systemd-journald[237]: Journal started
Sep 5 23:55:05.863424 systemd-journald[237]: Runtime Journal (/run/log/journal/2db5bca78e7d4f3db11ab530201b6f63) is 5.9M, max 47.3M, 41.4M free.
Sep 5 23:55:05.867388 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 23:55:05.867417 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 23:55:05.855758 systemd-modules-load[238]: Inserted module 'overlay'
Sep 5 23:55:05.870498 systemd-modules-load[238]: Inserted module 'br_netfilter'
Sep 5 23:55:05.871872 kernel: Bridge firewalling registered
Sep 5 23:55:05.871898 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 23:55:05.873004 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:55:05.874109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:55:05.877295 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 23:55:05.881125 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:55:05.882787 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 23:55:05.895446 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 23:55:05.897424 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 23:55:05.902669 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:55:05.906660 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:55:05.910277 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:55:05.918415 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 23:55:05.919468 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:55:05.922069 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 23:55:05.936612 dracut-cmdline[278]: dracut-dracut-053
Sep 5 23:55:05.939114 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:55:05.943742 systemd-resolved[275]: Positive Trust Anchors:
Sep 5 23:55:05.943760 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 23:55:05.943792 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 23:55:05.948581 systemd-resolved[275]: Defaulting to hostname 'linux'.
Sep 5 23:55:05.949608 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 23:55:05.952416 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:55:06.011257 kernel: SCSI subsystem initialized
Sep 5 23:55:06.016239 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 23:55:06.023246 kernel: iscsi: registered transport (tcp)
Sep 5 23:55:06.036330 kernel: iscsi: registered transport (qla4xxx)
Sep 5 23:55:06.036389 kernel: QLogic iSCSI HBA Driver
Sep 5 23:55:06.078984 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 23:55:06.087403 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 23:55:06.104147 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 23:55:06.104228 kernel: device-mapper: uevent: version 1.0.3
Sep 5 23:55:06.104241 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 23:55:06.151258 kernel: raid6: neonx8 gen() 15656 MB/s
Sep 5 23:55:06.168251 kernel: raid6: neonx4 gen() 14982 MB/s
Sep 5 23:55:06.185243 kernel: raid6: neonx2 gen() 13221 MB/s
Sep 5 23:55:06.202241 kernel: raid6: neonx1 gen() 10495 MB/s
Sep 5 23:55:06.219242 kernel: raid6: int64x8 gen() 6956 MB/s
Sep 5 23:55:06.236235 kernel: raid6: int64x4 gen() 7343 MB/s
Sep 5 23:55:06.253249 kernel: raid6: int64x2 gen() 6128 MB/s
Sep 5 23:55:06.270244 kernel: raid6: int64x1 gen() 5052 MB/s
Sep 5 23:55:06.270280 kernel: raid6: using algorithm neonx8 gen() 15656 MB/s
Sep 5 23:55:06.287257 kernel: raid6: .... xor() 11615 MB/s, rmw enabled
Sep 5 23:55:06.287295 kernel: raid6: using neon recovery algorithm
Sep 5 23:55:06.292614 kernel: xor: measuring software checksum speed
Sep 5 23:55:06.292640 kernel: 8regs : 19745 MB/sec
Sep 5 23:55:06.293229 kernel: 32regs : 19249 MB/sec
Sep 5 23:55:06.294258 kernel: arm64_neon : 25022 MB/sec
Sep 5 23:55:06.294279 kernel: xor: using function: arm64_neon (25022 MB/sec)
Sep 5 23:55:06.343252 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 23:55:06.354594 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 23:55:06.366411 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:55:06.378341 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Sep 5 23:55:06.381608 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:55:06.385068 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 23:55:06.399647 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Sep 5 23:55:06.427792 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 23:55:06.442458 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 23:55:06.484113 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:55:06.491398 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 23:55:06.504408 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 23:55:06.505978 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 23:55:06.507089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:55:06.509139 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 23:55:06.515379 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 23:55:06.528908 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 23:55:06.534254 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 5 23:55:06.536256 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 23:55:06.543235 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 23:55:06.543277 kernel: GPT:9289727 != 19775487
Sep 5 23:55:06.543288 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 23:55:06.545330 kernel: GPT:9289727 != 19775487
Sep 5 23:55:06.545355 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 23:55:06.546228 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:55:06.547829 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 23:55:06.547903 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:55:06.549939 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:55:06.552290 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 23:55:06.552350 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:55:06.554981 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:55:06.564243 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (507)
Sep 5 23:55:06.564276 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (509)
Sep 5 23:55:06.566400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:55:06.580504 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 23:55:06.583260 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:55:06.591499 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 23:55:06.598467 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 23:55:06.599463 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 23:55:06.604852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 23:55:06.616410 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 23:55:06.618506 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:55:06.624015 disk-uuid[551]: Primary Header is updated.
Sep 5 23:55:06.624015 disk-uuid[551]: Secondary Entries is updated.
Sep 5 23:55:06.624015 disk-uuid[551]: Secondary Header is updated.
Sep 5 23:55:06.629234 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:55:06.636238 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:55:06.642650 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:55:07.636095 disk-uuid[552]: The operation has completed successfully.
Sep 5 23:55:07.637388 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:55:07.657990 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 23:55:07.658085 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 23:55:07.681381 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 23:55:07.684234 sh[573]: Success
Sep 5 23:55:07.693268 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 5 23:55:07.722346 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 23:55:07.742597 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 23:55:07.744703 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 23:55:07.756927 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e
Sep 5 23:55:07.756964 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:55:07.756975 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 23:55:07.756985 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 23:55:07.756995 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 23:55:07.760992 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 23:55:07.762162 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 23:55:07.762964 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 23:55:07.764957 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 23:55:07.778761 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:55:07.778816 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:55:07.778827 kernel: BTRFS info (device vda6): using free space tree
Sep 5 23:55:07.781273 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 23:55:07.789624 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 23:55:07.790894 kernel: BTRFS info (device vda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:55:07.797824 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 23:55:07.807396 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 23:55:07.854697 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 23:55:07.863375 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 23:55:07.872020 ignition[672]: Ignition 2.19.0
Sep 5 23:55:07.872030 ignition[672]: Stage: fetch-offline
Sep 5 23:55:07.872072 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:55:07.872080 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:55:07.872248 ignition[672]: parsed url from cmdline: ""
Sep 5 23:55:07.872251 ignition[672]: no config URL provided
Sep 5 23:55:07.872256 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:55:07.872263 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:55:07.872286 ignition[672]: op(1): [started] loading QEMU firmware config module
Sep 5 23:55:07.872290 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 23:55:07.878608 ignition[672]: op(1): [finished] loading QEMU firmware config module
Sep 5 23:55:07.881953 systemd-networkd[762]: lo: Link UP
Sep 5 23:55:07.881963 systemd-networkd[762]: lo: Gained carrier
Sep 5 23:55:07.882632 systemd-networkd[762]: Enumeration completed
Sep 5 23:55:07.882732 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 23:55:07.883031 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:55:07.883034 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:55:07.883858 systemd-networkd[762]: eth0: Link UP
Sep 5 23:55:07.883861 systemd-networkd[762]: eth0: Gained carrier
Sep 5 23:55:07.883867 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:55:07.886373 systemd[1]: Reached target network.target - Network.
Sep 5 23:55:07.905271 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 23:55:07.930234 ignition[672]: parsing config with SHA512: 251dd18ce94f0202202d0757a1ae718e78760bd1b51db397f00013fb45d942cc129cc1857c9af946b7228a992a914edabefe0b8571276d76a5b4cfc93d731a23
Sep 5 23:55:07.935301 unknown[672]: fetched base config from "system"
Sep 5 23:55:07.935317 unknown[672]: fetched user config from "qemu"
Sep 5 23:55:07.935963 ignition[672]: fetch-offline: fetch-offline passed
Sep 5 23:55:07.936372 ignition[672]: Ignition finished successfully
Sep 5 23:55:07.938061 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:55:07.939552 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 23:55:07.951372 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 23:55:07.961919 ignition[770]: Ignition 2.19.0
Sep 5 23:55:07.961935 ignition[770]: Stage: kargs
Sep 5 23:55:07.962121 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:55:07.962130 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:55:07.963071 ignition[770]: kargs: kargs passed
Sep 5 23:55:07.963117 ignition[770]: Ignition finished successfully
Sep 5 23:55:07.965170 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 23:55:07.979397 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 23:55:07.989463 ignition[778]: Ignition 2.19.0
Sep 5 23:55:07.989473 ignition[778]: Stage: disks
Sep 5 23:55:07.989632 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:55:07.989642 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:55:07.990526 ignition[778]: disks: disks passed
Sep 5 23:55:07.990568 ignition[778]: Ignition finished successfully
Sep 5 23:55:07.993268 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 23:55:07.994954 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 23:55:07.995882 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 23:55:07.997475 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 23:55:07.999020 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 23:55:08.000406 systemd[1]: Reached target basic.target - Basic System.
Sep 5 23:55:08.011381 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 23:55:08.021180 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 5 23:55:08.027269 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 23:55:08.029047 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 23:55:08.073242 kernel: EXT4-fs (vda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none.
Sep 5 23:55:08.073368 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 23:55:08.074463 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 23:55:08.086350 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:55:08.087979 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 23:55:08.088965 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 23:55:08.089008 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 23:55:08.089033 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:55:08.095967 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (796)
Sep 5 23:55:08.095988 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:55:08.095105 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 23:55:08.099301 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:55:08.099320 kernel: BTRFS info (device vda6): using free space tree
Sep 5 23:55:08.100667 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 23:55:08.102796 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 23:55:08.105319 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:55:08.136923 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 23:55:08.140259 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Sep 5 23:55:08.143390 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 23:55:08.146363 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 23:55:08.220865 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 23:55:08.231326 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 23:55:08.232769 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 23:55:08.239234 kernel: BTRFS info (device vda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:55:08.253370 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 23:55:08.260321 ignition[910]: INFO : Ignition 2.19.0
Sep 5 23:55:08.260321 ignition[910]: INFO : Stage: mount
Sep 5 23:55:08.261725 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:55:08.261725 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:55:08.264198 ignition[910]: INFO : mount: mount passed
Sep 5 23:55:08.264198 ignition[910]: INFO : Ignition finished successfully
Sep 5 23:55:08.264700 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 23:55:08.286348 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 23:55:08.754557 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 23:55:08.763421 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:55:08.769228 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (924)
Sep 5 23:55:08.770991 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:55:08.771018 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:55:08.771029 kernel: BTRFS info (device vda6): using free space tree
Sep 5 23:55:08.773226 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 23:55:08.774713 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:55:08.790904 ignition[941]: INFO : Ignition 2.19.0
Sep 5 23:55:08.790904 ignition[941]: INFO : Stage: files
Sep 5 23:55:08.792235 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:55:08.792235 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:55:08.792235 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 23:55:08.795117 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 23:55:08.795117 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 23:55:08.795117 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 23:55:08.798582 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 23:55:08.798582 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 23:55:08.798582 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 5 23:55:08.798582 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 5 23:55:08.798582 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 5 23:55:08.798582 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 5 23:55:08.795549 unknown[941]: wrote ssh authorized keys file for user: core
Sep 5 23:55:08.870631 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 5 23:55:09.479384 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 5 23:55:09.479384 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 23:55:09.482591 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 23:55:09.482591 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:55:09.482591 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:55:09.482591 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:55:09.482591 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:55:09.482591 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:55:09.482591 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:55:09.482591 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:55:09.482591 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:55:09.482591 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:55:09.482591 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:55:09.482591 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:55:09.482591 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 5 23:55:09.762368 systemd-networkd[762]: eth0: Gained IPv6LL
Sep 5 23:55:09.840764 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 5 23:55:10.174402 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:55:10.174402 ignition[941]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 5 23:55:10.177275 ignition[941]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 5 23:55:10.177275 ignition[941]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 5 23:55:10.177275 ignition[941]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 5 23:55:10.177275 ignition[941]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 5 23:55:10.177275 ignition[941]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:55:10.177275 ignition[941]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:55:10.177275 ignition[941]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 5 23:55:10.177275 ignition[941]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Sep 5 23:55:10.177275 ignition[941]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 23:55:10.177275 ignition[941]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 23:55:10.177275 ignition[941]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Sep 5 23:55:10.177275 ignition[941]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 23:55:10.201486 ignition[941]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 23:55:10.205704 ignition[941]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 23:55:10.209663 ignition[941]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 23:55:10.209663 ignition[941]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 23:55:10.209663 ignition[941]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 23:55:10.209663 ignition[941]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:55:10.209663 ignition[941]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:55:10.209663 ignition[941]: INFO : files: files passed
Sep 5 23:55:10.209663 ignition[941]: INFO : Ignition finished successfully
Sep 5 23:55:10.208443 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 23:55:10.217427 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 23:55:10.219756 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 23:55:10.223267 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 23:55:10.223478 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 23:55:10.228141 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 23:55:10.233295 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:55:10.234716 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:55:10.234716 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:55:10.236992 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 23:55:10.238361 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 23:55:10.246406 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 23:55:10.270357 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 23:55:10.270489 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 23:55:10.272326 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 23:55:10.273787 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 23:55:10.275139 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 23:55:10.275958 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 23:55:10.294077 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:55:10.304386 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 23:55:10.312991 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:55:10.314104 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:55:10.315800 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 23:55:10.317136 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 23:55:10.317275 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:55:10.319235 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 23:55:10.320891 systemd[1]: Stopped target basic.target - Basic System. Sep 5 23:55:10.322256 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 23:55:10.323626 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 23:55:10.325187 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 23:55:10.326842 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 23:55:10.328243 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:55:10.329916 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 23:55:10.331457 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 23:55:10.332883 systemd[1]: Stopped target swap.target - Swaps. Sep 5 23:55:10.334024 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 23:55:10.334155 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:55:10.336079 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:55:10.337746 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:55:10.339281 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 23:55:10.340313 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:55:10.341901 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 23:55:10.342028 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 23:55:10.344254 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 23:55:10.344373 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 23:55:10.345965 systemd[1]: Stopped target paths.target - Path Units. Sep 5 23:55:10.347197 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 23:55:10.347395 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:55:10.350929 systemd[1]: Stopped target slices.target - Slice Units. 
Sep 5 23:55:10.354136 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 23:55:10.355572 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 23:55:10.355672 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:55:10.357371 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 23:55:10.357474 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:55:10.358551 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 23:55:10.360024 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:55:10.362195 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 23:55:10.362363 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 23:55:10.378468 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 23:55:10.379967 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 23:55:10.380698 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 23:55:10.380822 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:55:10.382380 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 23:55:10.382546 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:55:10.387307 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 23:55:10.387413 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 23:55:10.393508 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 23:55:10.395368 ignition[996]: INFO : Ignition 2.19.0 Sep 5 23:55:10.395368 ignition[996]: INFO : Stage: umount Sep 5 23:55:10.397742 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:55:10.397742 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 23:55:10.397742 ignition[996]: INFO : umount: umount passed Sep 5 23:55:10.397742 ignition[996]: INFO : Ignition finished successfully Sep 5 23:55:10.398830 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 23:55:10.398933 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 23:55:10.400771 systemd[1]: Stopped target network.target - Network. Sep 5 23:55:10.401931 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 23:55:10.402002 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 23:55:10.403290 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 23:55:10.403336 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 23:55:10.404716 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 23:55:10.404754 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 23:55:10.406174 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 23:55:10.406230 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 23:55:10.407770 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 23:55:10.409028 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 23:55:10.419257 systemd-networkd[762]: eth0: DHCPv6 lease lost Sep 5 23:55:10.420857 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 23:55:10.420995 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 23:55:10.423547 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Sep 5 23:55:10.423590 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:55:10.431411 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 23:55:10.432081 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 23:55:10.432140 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:55:10.433926 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:55:10.436866 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 23:55:10.436965 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 23:55:10.440483 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 23:55:10.440590 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:55:10.442429 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 23:55:10.442493 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 23:55:10.443566 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 23:55:10.443611 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:55:10.446729 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 23:55:10.446835 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 23:55:10.449985 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 23:55:10.450139 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:55:10.452265 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 23:55:10.452355 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 23:55:10.454435 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 23:55:10.454492 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 23:55:10.455652 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 23:55:10.455690 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:55:10.457310 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 23:55:10.457361 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:55:10.459453 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 23:55:10.459502 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 23:55:10.461748 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:55:10.461794 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:55:10.464352 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 23:55:10.464405 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 23:55:10.485461 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 23:55:10.486299 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 23:55:10.486358 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:55:10.488119 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 5 23:55:10.488160 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:55:10.489828 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Sep 5 23:55:10.489864 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:55:10.491632 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:55:10.491678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:55:10.493518 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 23:55:10.494303 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 23:55:10.496610 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 23:55:10.499075 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 23:55:10.509671 systemd[1]: Switching root. Sep 5 23:55:10.534456 systemd-journald[237]: Journal stopped Sep 5 23:55:11.236695 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 5 23:55:11.236760 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 23:55:11.236773 kernel: SELinux: policy capability open_perms=1 Sep 5 23:55:11.236783 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 23:55:11.236794 kernel: SELinux: policy capability always_check_network=0 Sep 5 23:55:11.236808 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 23:55:11.236818 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 23:55:11.236829 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 23:55:11.236841 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 23:55:11.236851 kernel: audit: type=1403 audit(1757116510.713:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 23:55:11.236862 systemd[1]: Successfully loaded SELinux policy in 41.708ms. Sep 5 23:55:11.236879 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.601ms. Sep 5 23:55:11.236891 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:55:11.236903 systemd[1]: Detected virtualization kvm. Sep 5 23:55:11.236913 systemd[1]: Detected architecture arm64. Sep 5 23:55:11.236923 systemd[1]: Detected first boot. Sep 5 23:55:11.236936 systemd[1]: Initializing machine ID from VM UUID. Sep 5 23:55:11.236947 zram_generator::config[1064]: No configuration found. Sep 5 23:55:11.236958 systemd[1]: Populated /etc with preset unit settings. Sep 5 23:55:11.236969 systemd[1]: Queued start job for default target multi-user.target. Sep 5 23:55:11.236980 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 23:55:11.236992 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 23:55:11.237003 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 23:55:11.237013 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 23:55:11.237024 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 23:55:11.237037 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 23:55:11.237050 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 23:55:11.237060 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Sep 5 23:55:11.237071 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 23:55:11.237081 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:55:11.237092 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:55:11.237103 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 23:55:11.237113 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 23:55:11.237125 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 23:55:11.237136 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 23:55:11.237146 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 5 23:55:11.237157 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:55:11.237168 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 23:55:11.237178 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:55:11.237188 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:55:11.237199 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:55:11.237211 systemd[1]: Reached target swap.target - Swaps. Sep 5 23:55:11.237240 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 23:55:11.237251 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 23:55:11.237261 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 23:55:11.237271 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 23:55:11.237281 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:55:11.237293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:55:11.237304 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:55:11.237314 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 23:55:11.237325 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 23:55:11.237338 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 23:55:11.237348 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 23:55:11.237359 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 23:55:11.237370 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 23:55:11.237381 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 23:55:11.237391 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 23:55:11.237402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:55:11.237413 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:55:11.237424 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 23:55:11.237436 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:55:11.237447 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 5 23:55:11.237457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:55:11.237468 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 23:55:11.237480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:55:11.237490 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 23:55:11.237501 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 5 23:55:11.237512 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 5 23:55:11.237524 kernel: fuse: init (API version 7.39) Sep 5 23:55:11.237534 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:55:11.237544 kernel: loop: module loaded Sep 5 23:55:11.237554 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:55:11.237564 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 23:55:11.237574 kernel: ACPI: bus type drm_connector registered Sep 5 23:55:11.237584 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 23:55:11.237595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:55:11.237605 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 23:55:11.237617 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 23:55:11.237628 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 23:55:11.237671 systemd-journald[1147]: Collecting audit messages is disabled. Sep 5 23:55:11.237698 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 23:55:11.237709 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 23:55:11.237722 systemd-journald[1147]: Journal started Sep 5 23:55:11.237745 systemd-journald[1147]: Runtime Journal (/run/log/journal/2db5bca78e7d4f3db11ab530201b6f63) is 5.9M, max 47.3M, 41.4M free. Sep 5 23:55:11.241847 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:55:11.241719 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 23:55:11.243011 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 23:55:11.244314 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:55:11.245504 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 23:55:11.245693 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 23:55:11.246927 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:55:11.247089 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:55:11.248327 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:55:11.248485 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:55:11.249727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:55:11.249884 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:55:11.251319 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 23:55:11.251471 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Sep 5 23:55:11.252538 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:55:11.252779 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:55:11.254134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:55:11.255411 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 23:55:11.256806 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 23:55:11.268123 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 23:55:11.278374 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 23:55:11.280427 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 23:55:11.281325 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 23:55:11.286024 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 23:55:11.290529 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 23:55:11.291576 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:55:11.294514 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 23:55:11.295488 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:55:11.296750 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:55:11.299493 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 23:55:11.301130 systemd-journald[1147]: Time spent on flushing to /var/log/journal/2db5bca78e7d4f3db11ab530201b6f63 is 14.450ms for 845 entries. Sep 5 23:55:11.301130 systemd-journald[1147]: System Journal (/var/log/journal/2db5bca78e7d4f3db11ab530201b6f63) is 8.0M, max 195.6M, 187.6M free. Sep 5 23:55:11.325578 systemd-journald[1147]: Received client request to flush runtime journal. Sep 5 23:55:11.301897 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:55:11.304563 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 23:55:11.305729 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 23:55:11.310621 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 23:55:11.314708 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 23:55:11.316091 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 23:55:11.330626 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 23:55:11.332550 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Sep 5 23:55:11.332689 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:55:11.332934 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Sep 5 23:55:11.334798 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 5 23:55:11.337875 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
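
The journald entries above show the runtime journal in /run/log/journal being flushed into the persistent journal under /var/log/journal, with the size caps quoted in the log. On a running system the same state can be inspected and triggered with standard journalctl options, for example:

    journalctl --disk-usage   # total space used by runtime and persistent journals
    journalctl --flush        # ask journald to flush /run/log/journal to /var now
    journalctl --header       # per-file metadata, including size limits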
Sep 5 23:55:11.349493 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 23:55:11.368021 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 23:55:11.380411 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:55:11.392370 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Sep 5 23:55:11.392391 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Sep 5 23:55:11.396306 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:55:11.780177 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 23:55:11.797454 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:55:11.817935 systemd-udevd[1222]: Using default interface naming scheme 'v255'. Sep 5 23:55:11.830967 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:55:11.844523 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:55:11.848650 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 23:55:11.859790 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Sep 5 23:55:11.921192 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 23:55:11.923236 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1234) Sep 5 23:55:11.944954 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 23:55:11.953851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:55:11.961533 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 23:55:11.964371 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 23:55:11.979614 lvm[1258]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:55:11.989122 systemd-networkd[1231]: lo: Link UP Sep 5 23:55:11.989130 systemd-networkd[1231]: lo: Gained carrier Sep 5 23:55:11.989856 systemd-networkd[1231]: Enumeration completed Sep 5 23:55:11.990353 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:55:11.991069 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:55:11.991073 systemd-networkd[1231]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:55:11.991760 systemd-networkd[1231]: eth0: Link UP Sep 5 23:55:11.991770 systemd-networkd[1231]: eth0: Gained carrier Sep 5 23:55:11.991783 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:55:11.999392 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 23:55:12.000986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:55:12.003665 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 23:55:12.004930 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:55:12.006295 systemd-networkd[1231]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 23:55:12.007368 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
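
eth0 here is matched by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which the log names but does not reproduce. A match-everything DHCP policy of that kind typically looks like this (illustrative, not the shipped file verbatim):

    [Match]
    Name=*

    [Network]
    DHCP=yes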
Sep 5 23:55:12.015144 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:55:12.062936 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 23:55:12.064187 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 23:55:12.065154 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 23:55:12.065186 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 23:55:12.066039 systemd[1]: Reached target machines.target - Containers. Sep 5 23:55:12.067849 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 23:55:12.081847 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 23:55:12.084136 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 23:55:12.085304 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:55:12.088385 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 23:55:12.090832 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 23:55:12.095397 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 23:55:12.097402 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 23:55:12.101803 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 23:55:12.110404 kernel: loop0: detected capacity change from 0 to 114432 Sep 5 23:55:12.113388 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 23:55:12.114255 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 23:55:12.119412 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 23:55:12.148257 kernel: loop1: detected capacity change from 0 to 114328 Sep 5 23:55:12.182283 kernel: loop2: detected capacity change from 0 to 203944 Sep 5 23:55:12.216241 kernel: loop3: detected capacity change from 0 to 114432 Sep 5 23:55:12.222369 kernel: loop4: detected capacity change from 0 to 114328 Sep 5 23:55:12.227280 kernel: loop5: detected capacity change from 0 to 203944 Sep 5 23:55:12.230923 (sd-merge)[1288]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 23:55:12.231742 (sd-merge)[1288]: Merged extensions into '/usr'. Sep 5 23:55:12.235617 systemd[1]: Reloading requested from client PID 1276 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 23:55:12.235643 systemd[1]: Reloading... Sep 5 23:55:12.278356 zram_generator::config[1316]: No configuration found. Sep 5 23:55:12.338191 ldconfig[1272]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 23:55:12.383463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:55:12.427471 systemd[1]: Reloading finished in 191 ms. Sep 5 23:55:12.441097 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
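
The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr; the kubernetes image is found via the /etc/extensions/kubernetes.raw symlink Ignition wrote earlier. The same mechanism can be driven by hand with the systemd-sysext tool, e.g. (my-ext.raw is a hypothetical image name):

    systemd-sysext status                                        # list merged images
    ln -s /opt/extensions/my-ext.raw /etc/extensions/my-ext.raw
    systemd-sysext refresh                                       # unmerge and re-merge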
Sep 5 23:55:12.442418 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 23:55:12.456409 systemd[1]: Starting ensure-sysext.service... Sep 5 23:55:12.458230 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:55:12.461553 systemd[1]: Reloading requested from client PID 1357 ('systemctl') (unit ensure-sysext.service)... Sep 5 23:55:12.461573 systemd[1]: Reloading... Sep 5 23:55:12.474784 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 23:55:12.475051 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 23:55:12.475765 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 23:55:12.476101 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Sep 5 23:55:12.476147 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Sep 5 23:55:12.478533 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:55:12.478543 systemd-tmpfiles[1358]: Skipping /boot Sep 5 23:55:12.485749 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:55:12.485767 systemd-tmpfiles[1358]: Skipping /boot Sep 5 23:55:12.515247 zram_generator::config[1389]: No configuration found. Sep 5 23:55:12.599580 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:55:12.644100 systemd[1]: Reloading finished in 182 ms. Sep 5 23:55:12.658380 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:55:12.675817 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:55:12.679998 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 23:55:12.682801 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 23:55:12.685653 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:55:12.689173 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 23:55:12.701327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:55:12.702664 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:55:12.705370 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:55:12.710466 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:55:12.712522 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:55:12.714404 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:55:12.714586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:55:12.717888 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:55:12.718047 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:55:12.726714 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
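
The "Duplicate line" warnings above are systemd-tmpfiles noticing that two tmpfiles.d fragments declare the same path; the later declaration is ignored. tmpfiles.d lines follow the form type path mode user group age, for example (a generic line, not one of the Flatcar fragments themselves):

    d /var/log/journal 2755 root systemd-journal -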
Sep 5 23:55:12.728331 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 23:55:12.729340 augenrules[1456]: No rules Sep 5 23:55:12.733791 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:55:12.735312 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:55:12.737500 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:55:12.744751 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 23:55:12.746406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:55:12.756591 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:55:12.760490 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:55:12.762531 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:55:12.763579 systemd-resolved[1433]: Positive Trust Anchors: Sep 5 23:55:12.763598 systemd-resolved[1433]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:55:12.763638 systemd-resolved[1433]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:55:12.766509 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:55:12.767601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:55:12.769692 systemd-resolved[1433]: Defaulting to hostname 'linux'. Sep 5 23:55:12.771503 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 23:55:12.772431 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 23:55:12.773516 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:55:12.774835 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:55:12.775006 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:55:12.776526 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:55:12.776695 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:55:12.777955 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:55:12.778111 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:55:12.779732 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:55:12.779939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:55:12.783323 systemd[1]: Finished ensure-sysext.service. Sep 5 23:55:12.784584 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 23:55:12.789980 systemd[1]: Reached target network.target - Network. 
Sep 5 23:55:12.790808 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:55:12.792070 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:55:12.792147 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:55:12.799412 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 23:55:12.841143 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 23:55:12.417628 systemd-resolved[1433]: Clock change detected. Flushing caches. Sep 5 23:55:12.422553 systemd-journald[1147]: Time jumped backwards, rotating. Sep 5 23:55:12.417678 systemd-timesyncd[1490]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 23:55:12.417719 systemd-timesyncd[1490]: Initial clock synchronization to Fri 2025-09-05 23:55:12.417575 UTC. Sep 5 23:55:12.418997 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:55:12.419956 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 23:55:12.421113 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 23:55:12.422183 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 23:55:12.423196 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 23:55:12.423232 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:55:12.423898 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 23:55:12.424898 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 23:55:12.426336 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 23:55:12.427507 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:55:12.429328 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 23:55:12.431811 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 23:55:12.433885 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 23:55:12.447125 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 23:55:12.448014 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:55:12.448752 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:55:12.449691 systemd[1]: System is tainted: cgroupsv1 Sep 5 23:55:12.449745 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:55:12.449770 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:55:12.451065 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 23:55:12.453025 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 23:55:12.454919 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 23:55:12.460088 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 23:55:12.461108 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
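
The systemd-timesyncd entries above record NTP contact with 10.0.0.1:123 and the initial clock synchronization; the subsequent timestamps jump backwards and the journal rotates because the clock step happened after journald had already started. The server is most likely learned from DHCP here, but the equivalent static configuration would be (sketch only):

    # /etc/systemd/timesyncd.conf
    [Time]
    NTP=10.0.0.1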
Sep 5 23:55:12.462431 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 23:55:12.463915 jq[1497]: false Sep 5 23:55:12.465031 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 23:55:12.470210 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 23:55:12.476150 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 23:55:12.476850 dbus-daemon[1496]: [system] SELinux support is enabled Sep 5 23:55:12.480715 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 23:55:12.482767 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 23:55:12.483509 extend-filesystems[1499]: Found loop3 Sep 5 23:55:12.484354 extend-filesystems[1499]: Found loop4 Sep 5 23:55:12.484354 extend-filesystems[1499]: Found loop5 Sep 5 23:55:12.484354 extend-filesystems[1499]: Found vda Sep 5 23:55:12.484354 extend-filesystems[1499]: Found vda1 Sep 5 23:55:12.484354 extend-filesystems[1499]: Found vda2 Sep 5 23:55:12.484354 extend-filesystems[1499]: Found vda3 Sep 5 23:55:12.484354 extend-filesystems[1499]: Found usr Sep 5 23:55:12.484354 extend-filesystems[1499]: Found vda4 Sep 5 23:55:12.484354 extend-filesystems[1499]: Found vda6 Sep 5 23:55:12.484354 extend-filesystems[1499]: Found vda7 Sep 5 23:55:12.484354 extend-filesystems[1499]: Found vda9 Sep 5 23:55:12.484354 extend-filesystems[1499]: Checking size of /dev/vda9 Sep 5 23:55:12.487157 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 23:55:12.490362 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 23:55:12.494825 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 23:55:12.500373 extend-filesystems[1499]: Resized partition /dev/vda9 Sep 5 23:55:12.505805 extend-filesystems[1524]: resize2fs 1.47.1 (20-May-2024) Sep 5 23:55:12.513719 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 23:55:12.513974 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 23:55:12.514365 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 23:55:12.514584 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 23:55:12.517978 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1229) Sep 5 23:55:12.518044 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 23:55:12.518058 jq[1517]: true Sep 5 23:55:12.523561 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 23:55:12.523937 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
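
The extend-filesystems entries around here grow the root filesystem: the vda9 partition is enlarged, then resize2fs performs an online grow of the mounted ext4 filesystem from 553472 to 1864699 4k blocks (completion is logged just below). Done by hand, the equivalent is roughly:

    # assumes the underlying partition has already been enlarged
    resize2fs /dev/vda9   # ext4 supports growing while mounted
    df -h /               # confirm the new size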
Sep 5 23:55:12.540925 jq[1529]: true Sep 5 23:55:12.543427 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 23:55:12.550983 (ntainerd)[1538]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 23:55:12.552437 update_engine[1514]: I20250905 23:55:12.552211 1514 main.cc:92] Flatcar Update Engine starting Sep 5 23:55:12.555525 update_engine[1514]: I20250905 23:55:12.554016 1514 update_check_scheduler.cc:74] Next update check in 8m16s Sep 5 23:55:12.555752 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 23:55:12.558699 tar[1528]: linux-arm64/helm Sep 5 23:55:12.559191 extend-filesystems[1524]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 23:55:12.559191 extend-filesystems[1524]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 23:55:12.559191 extend-filesystems[1524]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 23:55:12.566931 extend-filesystems[1499]: Resized filesystem in /dev/vda9 Sep 5 23:55:12.559409 systemd-logind[1510]: New seat seat0. Sep 5 23:55:12.563261 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 23:55:12.563517 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 23:55:12.565458 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 23:55:12.577478 systemd[1]: Started update-engine.service - Update Engine. Sep 5 23:55:12.579906 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 23:55:12.580220 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 23:55:12.581505 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 23:55:12.581607 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 23:55:12.583232 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 23:55:12.592255 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 23:55:12.623851 bash[1560]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:55:12.626072 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 23:55:12.629733 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 23:55:12.650000 locksmithd[1550]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 23:55:12.719527 containerd[1538]: time="2025-09-05T23:55:12.719389892Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 23:55:12.745270 containerd[1538]: time="2025-09-05T23:55:12.745216732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:55:12.746607 containerd[1538]: time="2025-09-05T23:55:12.746552892Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:55:12.746607 containerd[1538]: time="2025-09-05T23:55:12.746584692Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 23:55:12.746607 containerd[1538]: time="2025-09-05T23:55:12.746599652Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 23:55:12.746768 containerd[1538]: time="2025-09-05T23:55:12.746749972Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 23:55:12.746787 containerd[1538]: time="2025-09-05T23:55:12.746773972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 23:55:12.746853 containerd[1538]: time="2025-09-05T23:55:12.746836092Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:55:12.746875 containerd[1538]: time="2025-09-05T23:55:12.746852612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:55:12.747092 containerd[1538]: time="2025-09-05T23:55:12.747056412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:55:12.747092 containerd[1538]: time="2025-09-05T23:55:12.747084252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 23:55:12.747136 containerd[1538]: time="2025-09-05T23:55:12.747097692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:55:12.747136 containerd[1538]: time="2025-09-05T23:55:12.747107532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 23:55:12.747194 containerd[1538]: time="2025-09-05T23:55:12.747178532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:55:12.747403 containerd[1538]: time="2025-09-05T23:55:12.747364412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:55:12.747553 containerd[1538]: time="2025-09-05T23:55:12.747533132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:55:12.747576 containerd[1538]: time="2025-09-05T23:55:12.747552252Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 23:55:12.747643 containerd[1538]: time="2025-09-05T23:55:12.747628172Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 5 23:55:12.747684 containerd[1538]: time="2025-09-05T23:55:12.747672932Z" level=info msg="metadata content store policy set" policy=shared Sep 5 23:55:12.751037 containerd[1538]: time="2025-09-05T23:55:12.751010412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 23:55:12.751087 containerd[1538]: time="2025-09-05T23:55:12.751057932Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 23:55:12.751087 containerd[1538]: time="2025-09-05T23:55:12.751077052Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 23:55:12.751122 containerd[1538]: time="2025-09-05T23:55:12.751092372Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 23:55:12.751122 containerd[1538]: time="2025-09-05T23:55:12.751107372Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 23:55:12.751348 containerd[1538]: time="2025-09-05T23:55:12.751248332Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 23:55:12.751568 containerd[1538]: time="2025-09-05T23:55:12.751549212Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 23:55:12.751675 containerd[1538]: time="2025-09-05T23:55:12.751660132Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 23:55:12.751697 containerd[1538]: time="2025-09-05T23:55:12.751680652Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 23:55:12.751714 containerd[1538]: time="2025-09-05T23:55:12.751699412Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 23:55:12.751739 containerd[1538]: time="2025-09-05T23:55:12.751712892Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 23:55:12.751739 containerd[1538]: time="2025-09-05T23:55:12.751724572Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 23:55:12.751773 containerd[1538]: time="2025-09-05T23:55:12.751741652Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 23:55:12.751773 containerd[1538]: time="2025-09-05T23:55:12.751755532Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 23:55:12.751773 containerd[1538]: time="2025-09-05T23:55:12.751770172Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 23:55:12.751822 containerd[1538]: time="2025-09-05T23:55:12.751782012Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 23:55:12.751822 containerd[1538]: time="2025-09-05T23:55:12.751793612Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 23:55:12.751822 containerd[1538]: time="2025-09-05T23:55:12.751803892Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 5 23:55:12.751870 containerd[1538]: time="2025-09-05T23:55:12.751821652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.751870 containerd[1538]: time="2025-09-05T23:55:12.751841852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.751870 containerd[1538]: time="2025-09-05T23:55:12.751854452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.751924 containerd[1538]: time="2025-09-05T23:55:12.751870612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.751924 containerd[1538]: time="2025-09-05T23:55:12.751883412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.751924 containerd[1538]: time="2025-09-05T23:55:12.751895652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.751924 containerd[1538]: time="2025-09-05T23:55:12.751907172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.751924 containerd[1538]: time="2025-09-05T23:55:12.751920052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.752025 containerd[1538]: time="2025-09-05T23:55:12.751932812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.752025 containerd[1538]: time="2025-09-05T23:55:12.751948132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.752025 containerd[1538]: time="2025-09-05T23:55:12.751979492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.752025 containerd[1538]: time="2025-09-05T23:55:12.751992772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.752025 containerd[1538]: time="2025-09-05T23:55:12.752004612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.752025 containerd[1538]: time="2025-09-05T23:55:12.752019892Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 23:55:12.752120 containerd[1538]: time="2025-09-05T23:55:12.752039532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.752120 containerd[1538]: time="2025-09-05T23:55:12.752051212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.752120 containerd[1538]: time="2025-09-05T23:55:12.752061332Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 23:55:12.752281 containerd[1538]: time="2025-09-05T23:55:12.752166772Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 23:55:12.752281 containerd[1538]: time="2025-09-05T23:55:12.752185092Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 23:55:12.752281 containerd[1538]: time="2025-09-05T23:55:12.752195132Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 23:55:12.752281 containerd[1538]: time="2025-09-05T23:55:12.752206332Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 23:55:12.752281 containerd[1538]: time="2025-09-05T23:55:12.752216852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.752281 containerd[1538]: time="2025-09-05T23:55:12.752233972Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 23:55:12.752281 containerd[1538]: time="2025-09-05T23:55:12.752244972Z" level=info msg="NRI interface is disabled by configuration." Sep 5 23:55:12.752281 containerd[1538]: time="2025-09-05T23:55:12.752257412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 5 23:55:12.752797 containerd[1538]: time="2025-09-05T23:55:12.752670572Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 23:55:12.752797 containerd[1538]: time="2025-09-05T23:55:12.752727572Z" level=info msg="Connect containerd service" Sep 5 23:55:12.752797 containerd[1538]: time="2025-09-05T23:55:12.752758212Z" level=info msg="using legacy CRI server" Sep 5 23:55:12.752797 containerd[1538]: time="2025-09-05T23:55:12.752765132Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 23:55:12.753080 containerd[1538]: time="2025-09-05T23:55:12.752853612Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 23:55:12.753477 containerd[1538]: time="2025-09-05T23:55:12.753434612Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:55:12.753836 containerd[1538]: time="2025-09-05T23:55:12.753693812Z" level=info msg="Start subscribing containerd event" Sep 5 23:55:12.753836 containerd[1538]: time="2025-09-05T23:55:12.753759332Z" level=info msg="Start recovering state" Sep 5 23:55:12.753992 containerd[1538]: time="2025-09-05T23:55:12.753909612Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 23:55:12.754065 containerd[1538]: time="2025-09-05T23:55:12.754047812Z" level=info msg="Start event monitor" Sep 5 23:55:12.754122 containerd[1538]: time="2025-09-05T23:55:12.754110172Z" level=info msg="Start snapshots syncer" Sep 5 23:55:12.754311 containerd[1538]: time="2025-09-05T23:55:12.754158452Z" level=info msg="Start cni network conf syncer for default" Sep 5 23:55:12.754311 containerd[1538]: time="2025-09-05T23:55:12.754074732Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 23:55:12.754311 containerd[1538]: time="2025-09-05T23:55:12.754174012Z" level=info msg="Start streaming server" Sep 5 23:55:12.754420 containerd[1538]: time="2025-09-05T23:55:12.754319892Z" level=info msg="containerd successfully booted in 0.036680s" Sep 5 23:55:12.754435 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 23:55:12.898752 tar[1528]: linux-arm64/LICENSE Sep 5 23:55:12.899299 tar[1528]: linux-arm64/README.md Sep 5 23:55:12.914534 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 23:55:13.112836 sshd_keygen[1523]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 23:55:13.132759 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 23:55:13.144285 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 23:55:13.150164 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 23:55:13.150435 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 23:55:13.153109 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 23:55:13.165599 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 23:55:13.168530 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 23:55:13.170839 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 5 23:55:13.172327 systemd[1]: Reached target getty.target - Login Prompts. 
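The containerd daemon above completes startup in roughly 37 ms and serves its API on /run/containerd/containerd.sock; the preceding CNI error is expected at this point, since no network config has been installed under /etc/cni/net.d yet. As a minimal sketch, assuming the containerd Go client library (github.com/containerd/containerd, matching the v1.7 line this host runs), a host process could confirm the daemon is answering:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the socket the log shows containerd serving on.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	v, err := client.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("containerd version:", v.Version)
}
```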
Sep 5 23:55:13.434093 systemd-networkd[1231]: eth0: Gained IPv6LL Sep 5 23:55:13.436798 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 23:55:13.438367 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 23:55:13.449252 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 23:55:13.451739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:55:13.453760 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 23:55:13.471730 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 23:55:13.471997 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 23:55:13.473360 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 23:55:13.475210 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 23:55:14.006004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:55:14.007268 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 23:55:14.008367 systemd[1]: Startup finished in 5.549s (kernel) + 3.761s (userspace) = 9.310s. Sep 5 23:55:14.010762 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:55:14.372008 kubelet[1633]: E0905 23:55:14.371882 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:55:14.374641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:55:14.374818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:55:17.994546 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 23:55:18.011702 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:50128.service - OpenSSH per-connection server daemon (10.0.0.1:50128). Sep 5 23:55:18.063511 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 50128 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:55:18.065552 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:18.078757 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 23:55:18.094871 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 23:55:18.099478 systemd-logind[1510]: New session 1 of user core. Sep 5 23:55:18.107690 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 23:55:18.110380 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 23:55:18.130069 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:55:18.209724 systemd[1651]: Queued start job for default target default.target. Sep 5 23:55:18.210095 systemd[1651]: Created slice app.slice - User Application Slice. Sep 5 23:55:18.210118 systemd[1651]: Reached target paths.target - Paths. Sep 5 23:55:18.210129 systemd[1651]: Reached target timers.target - Timers. Sep 5 23:55:18.229096 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket... 
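The first kubelet start above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; the unit's KUBELET_KUBEADM_ARGS reference suggests kubeadm is expected to write that file later. A sketch reproducing the same file check the error message describes:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// The kubelet aborts with exactly this class of error when its
	// config file is absent, as in the log above.
	const path = "/var/lib/kubelet/config.yaml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to read kubelet config file %q: %v\n", path, err)
		os.Exit(1)
	}
	fmt.Printf("loaded %d bytes of kubelet config\n", len(data))
}
```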
Sep 5 23:55:18.235101 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 23:55:18.235161 systemd[1651]: Reached target sockets.target - Sockets. Sep 5 23:55:18.235173 systemd[1651]: Reached target basic.target - Basic System. Sep 5 23:55:18.235208 systemd[1651]: Reached target default.target - Main User Target. Sep 5 23:55:18.235232 systemd[1651]: Startup finished in 98ms. Sep 5 23:55:18.235525 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 23:55:18.236957 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 23:55:18.293219 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:50138.service - OpenSSH per-connection server daemon (10.0.0.1:50138). Sep 5 23:55:18.331777 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 50138 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:55:18.333407 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:18.338056 systemd-logind[1510]: New session 2 of user core. Sep 5 23:55:18.351322 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 23:55:18.404615 sshd[1664]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:18.415315 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:50148.service - OpenSSH per-connection server daemon (10.0.0.1:50148). Sep 5 23:55:18.415722 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:50138.service: Deactivated successfully. Sep 5 23:55:18.418243 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 23:55:18.419618 systemd-logind[1510]: Session 2 logged out. Waiting for processes to exit. Sep 5 23:55:18.421840 systemd-logind[1510]: Removed session 2. Sep 5 23:55:18.449684 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 50148 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:55:18.451031 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:18.457421 systemd-logind[1510]: New session 3 of user core. Sep 5 23:55:18.468956 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 23:55:18.520003 sshd[1669]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:18.532399 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:50150.service - OpenSSH per-connection server daemon (10.0.0.1:50150). Sep 5 23:55:18.532820 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:50148.service: Deactivated successfully. Sep 5 23:55:18.534277 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 23:55:18.551745 systemd-logind[1510]: Session 3 logged out. Waiting for processes to exit. Sep 5 23:55:18.555694 systemd-logind[1510]: Removed session 3. Sep 5 23:55:18.576699 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 50150 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:55:18.577934 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:18.582822 systemd-logind[1510]: New session 4 of user core. Sep 5 23:55:18.594250 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 23:55:18.649308 sshd[1678]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:18.655209 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:50164.service - OpenSSH per-connection server daemon (10.0.0.1:50164). Sep 5 23:55:18.655595 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:50150.service: Deactivated successfully. Sep 5 23:55:18.658798 systemd[1]: session-4.scope: Deactivated successfully. 
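Each accepted connection above logs the same fingerprint (SHA256:E7E9…), so a single client key is opening all of these sessions. A sketch, assuming golang.org/x/crypto/ssh and an illustrative authorized_keys path, that computes the fingerprint form sshd prints:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical path; sshd here authenticates the "core" user.
	raw, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		log.Fatal(err)
	}
	// Parse the first key in authorized_keys format.
	pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
	if err != nil {
		log.Fatal(err)
	}
	// Same "SHA256:..." form that appears in the sshd accept lines.
	fmt.Println(ssh.FingerprintSHA256(pub))
}
```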
Sep 5 23:55:18.659522 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit. Sep 5 23:55:18.661229 systemd-logind[1510]: Removed session 4. Sep 5 23:55:18.690642 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 50164 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:55:18.691933 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:18.696036 systemd-logind[1510]: New session 5 of user core. Sep 5 23:55:18.703240 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 23:55:18.761501 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 23:55:18.761780 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:55:18.784512 sudo[1692]: pam_unix(sudo:session): session closed for user root Sep 5 23:55:18.786793 sshd[1685]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:18.804253 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:50180.service - OpenSSH per-connection server daemon (10.0.0.1:50180). Sep 5 23:55:18.804688 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:50164.service: Deactivated successfully. Sep 5 23:55:18.807314 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 23:55:18.809781 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit. Sep 5 23:55:18.812004 systemd-logind[1510]: Removed session 5. Sep 5 23:55:18.843938 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 50180 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:55:18.845518 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:18.851323 systemd-logind[1510]: New session 6 of user core. Sep 5 23:55:18.858258 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 23:55:18.914944 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 23:55:18.915697 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:55:18.920949 sudo[1702]: pam_unix(sudo:session): session closed for user root Sep 5 23:55:18.925602 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 23:55:18.926134 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:55:18.954316 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 23:55:18.956190 auditctl[1705]: No rules Sep 5 23:55:18.957163 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 23:55:18.957425 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 23:55:18.959653 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:55:18.991830 augenrules[1724]: No rules Sep 5 23:55:18.992908 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:55:18.994018 sudo[1701]: pam_unix(sudo:session): session closed for user root Sep 5 23:55:18.996662 sshd[1694]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:19.007265 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:50186.service - OpenSSH per-connection server daemon (10.0.0.1:50186). Sep 5 23:55:19.007730 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:50180.service: Deactivated successfully. Sep 5 23:55:19.012132 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit. 
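The two sudo commands above delete the shipped audit rule files and restart audit-rules, after which both auditctl and augenrules report "No rules": the kernel audit ruleset is now empty. A sketch that reads the same state back through the auditctl CLI (assuming it is on PATH and the caller holds CAP_AUDIT_CONTROL):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// "auditctl -l" lists the currently loaded kernel audit rules;
	// on this host it would print "No rules".
	out, err := exec.Command("auditctl", "-l").CombinedOutput()
	if err != nil {
		log.Fatalf("auditctl failed: %v: %s", err, out)
	}
	fmt.Print(string(out))
}
```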
Sep 5 23:55:19.012691 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 23:55:19.014042 systemd-logind[1510]: Removed session 6. Sep 5 23:55:19.044667 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 50186 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:55:19.045876 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:19.050629 systemd-logind[1510]: New session 7 of user core. Sep 5 23:55:19.058193 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 23:55:19.111908 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 23:55:19.112205 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:55:19.388216 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 23:55:19.388447 (dockerd)[1755]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 23:55:19.605503 dockerd[1755]: time="2025-09-05T23:55:19.605431452Z" level=info msg="Starting up" Sep 5 23:55:19.902861 dockerd[1755]: time="2025-09-05T23:55:19.902810652Z" level=info msg="Loading containers: start." Sep 5 23:55:20.000101 kernel: Initializing XFRM netlink socket Sep 5 23:55:20.098869 systemd-networkd[1231]: docker0: Link UP Sep 5 23:55:20.119236 dockerd[1755]: time="2025-09-05T23:55:20.119200652Z" level=info msg="Loading containers: done." Sep 5 23:55:20.131118 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck478087501-merged.mount: Deactivated successfully. Sep 5 23:55:20.132415 dockerd[1755]: time="2025-09-05T23:55:20.132291732Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 23:55:20.132415 dockerd[1755]: time="2025-09-05T23:55:20.132391772Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 23:55:20.132525 dockerd[1755]: time="2025-09-05T23:55:20.132493732Z" level=info msg="Daemon has completed initialization" Sep 5 23:55:20.164833 dockerd[1755]: time="2025-09-05T23:55:20.164623132Z" level=info msg="API listen on /run/docker.sock" Sep 5 23:55:20.165008 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 23:55:20.831157 containerd[1538]: time="2025-09-05T23:55:20.831120012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 5 23:55:21.459731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651998747.mount: Deactivated successfully. 
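Once dockerd logs "API listen on /run/docker.sock", the Engine API is usable. A minimal sketch with the Docker Go SDK (github.com/docker/docker/client; FromEnv falls back to the local socket when DOCKER_HOST is unset):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Ping hits the daemon that just reported "Daemon has completed
	// initialization" in the log.
	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker API version:", ping.APIVersion)
}
```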
Sep 5 23:55:22.425049 containerd[1538]: time="2025-09-05T23:55:22.424512412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:22.426339 containerd[1538]: time="2025-09-05T23:55:22.426070412Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443" Sep 5 23:55:22.427201 containerd[1538]: time="2025-09-05T23:55:22.427165732Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:22.430207 containerd[1538]: time="2025-09-05T23:55:22.430178932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:22.431522 containerd[1538]: time="2025-09-05T23:55:22.431485812Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.60032716s" Sep 5 23:55:22.431580 containerd[1538]: time="2025-09-05T23:55:22.431523412Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 5 23:55:22.432730 containerd[1538]: time="2025-09-05T23:55:22.432708572Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 5 23:55:23.474492 containerd[1538]: time="2025-09-05T23:55:23.474447332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:23.475211 containerd[1538]: time="2025-09-05T23:55:23.475183852Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311" Sep 5 23:55:23.476276 containerd[1538]: time="2025-09-05T23:55:23.476230172Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:23.479820 containerd[1538]: time="2025-09-05T23:55:23.479791692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:23.482049 containerd[1538]: time="2025-09-05T23:55:23.481931292Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.0491938s" Sep 5 23:55:23.482049 containerd[1538]: time="2025-09-05T23:55:23.481978692Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 5 23:55:23.482548 
containerd[1538]: time="2025-09-05T23:55:23.482422012Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 5 23:55:24.471050 containerd[1538]: time="2025-09-05T23:55:24.470911132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:24.471721 containerd[1538]: time="2025-09-05T23:55:24.471658772Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905" Sep 5 23:55:24.473900 containerd[1538]: time="2025-09-05T23:55:24.473854692Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:24.476490 containerd[1538]: time="2025-09-05T23:55:24.476453412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:24.477855 containerd[1538]: time="2025-09-05T23:55:24.477723532Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 995.2714ms" Sep 5 23:55:24.477855 containerd[1538]: time="2025-09-05T23:55:24.477762412Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 5 23:55:24.478337 containerd[1538]: time="2025-09-05T23:55:24.478288452Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 5 23:55:24.625069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 23:55:24.636122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:55:24.736880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:55:24.740667 (kubelet)[1980]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:55:24.818503 kubelet[1980]: E0905 23:55:24.818443 1980 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:55:24.821528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:55:24.821710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:55:25.680378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115827954.mount: Deactivated successfully. 
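kubelet.service fails again for the same missing-config reason, and systemd schedules another attempt ("restart counter is at 1"). A sketch, assuming github.com/coreos/go-systemd/v22, that reads the unit's ActiveState over D-Bus the same way systemctl does:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()

	// Connect to systemd's D-Bus API (needs suitable privileges).
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// ActiveState cycles through activating/failed during the
	// restart loop visible in the log.
	prop, err := conn.GetUnitPropertyContext(ctx, "kubelet.service", "ActiveState")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubelet.service ActiveState:", prop.Value.Value())
}
```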
Sep 5 23:55:26.028783 containerd[1538]: time="2025-09-05T23:55:26.028669572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:26.030107 containerd[1538]: time="2025-09-05T23:55:26.030076612Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097" Sep 5 23:55:26.031035 containerd[1538]: time="2025-09-05T23:55:26.031003012Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:26.034255 containerd[1538]: time="2025-09-05T23:55:26.034215412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:26.034872 containerd[1538]: time="2025-09-05T23:55:26.034846092Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.5565182s" Sep 5 23:55:26.034903 containerd[1538]: time="2025-09-05T23:55:26.034877932Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 5 23:55:26.035459 containerd[1538]: time="2025-09-05T23:55:26.035284932Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 23:55:26.648176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141130161.mount: Deactivated successfully. 
Sep 5 23:55:27.274950 containerd[1538]: time="2025-09-05T23:55:27.274894812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:27.275462 containerd[1538]: time="2025-09-05T23:55:27.275429132Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 5 23:55:27.276362 containerd[1538]: time="2025-09-05T23:55:27.276323452Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:27.280047 containerd[1538]: time="2025-09-05T23:55:27.280011052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:27.281941 containerd[1538]: time="2025-09-05T23:55:27.281892972Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.2465764s" Sep 5 23:55:27.281996 containerd[1538]: time="2025-09-05T23:55:27.281941972Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 5 23:55:27.282637 containerd[1538]: time="2025-09-05T23:55:27.282598252Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 23:55:27.720934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount471018201.mount: Deactivated successfully. 
Sep 5 23:55:27.727030 containerd[1538]: time="2025-09-05T23:55:27.726779092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:27.727644 containerd[1538]: time="2025-09-05T23:55:27.727594892Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 5 23:55:27.728549 containerd[1538]: time="2025-09-05T23:55:27.728514372Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:27.730677 containerd[1538]: time="2025-09-05T23:55:27.730636812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:27.731488 containerd[1538]: time="2025-09-05T23:55:27.731441292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 448.80276ms" Sep 5 23:55:27.731488 containerd[1538]: time="2025-09-05T23:55:27.731477932Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 23:55:27.731890 containerd[1538]: time="2025-09-05T23:55:27.731852652Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 5 23:55:28.303539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4044492609.mount: Deactivated successfully. Sep 5 23:55:29.766073 containerd[1538]: time="2025-09-05T23:55:29.766008252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:29.767536 containerd[1538]: time="2025-09-05T23:55:29.767447332Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 5 23:55:29.768295 containerd[1538]: time="2025-09-05T23:55:29.768232412Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:29.771939 containerd[1538]: time="2025-09-05T23:55:29.771878652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:29.773231 containerd[1538]: time="2025-09-05T23:55:29.773204172Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.04132148s" Sep 5 23:55:29.773414 containerd[1538]: time="2025-09-05T23:55:29.773308252Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 5 23:55:34.248526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
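The pull sequence above fetches the whole control-plane image set, and the reported durations track image size: etcd at ~66 MB takes ~2.04 s while pause at ~268 KB takes ~449 ms. A sketch of an equivalent pull through the containerd Go client, reusing the connection pattern from the earlier sketch:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack one of the images from the log.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```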
Sep 5 23:55:34.258213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:55:34.280343 systemd[1]: Reloading requested from client PID 2141 ('systemctl') (unit session-7.scope)... Sep 5 23:55:34.280360 systemd[1]: Reloading... Sep 5 23:55:34.353127 zram_generator::config[2182]: No configuration found. Sep 5 23:55:34.456304 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:55:34.511048 systemd[1]: Reloading finished in 230 ms. Sep 5 23:55:34.544083 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 23:55:34.544160 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 23:55:34.544437 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:55:34.545990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:55:34.654114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:55:34.659167 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:55:34.693845 kubelet[2237]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:55:34.693845 kubelet[2237]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 23:55:34.693845 kubelet[2237]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
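The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the kubelet config file. A minimal sketch that writes such a file, assuming the kubelet.config.k8s.io/v1beta1 schema (the field names are my reading of what these flags map to) plus the cgroupfs driver this node is shown using further down:

```go
package main

import (
	"log"
	"os"
)

// Assumed v1beta1 KubeletConfiguration fields; verify against the schema
// of the kubelet release actually deployed (v1.31 here).
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
cgroupDriver: cgroupfs
`

func main() {
	// The path the failing unit was looking for earlier in the log.
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /var/lib/kubelet/config.yaml")
}
```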
Sep 5 23:55:34.694211 kubelet[2237]: I0905 23:55:34.693893 2237 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:55:35.582068 kubelet[2237]: I0905 23:55:35.582030 2237 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 23:55:35.582068 kubelet[2237]: I0905 23:55:35.582062 2237 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:55:35.582326 kubelet[2237]: I0905 23:55:35.582310 2237 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 23:55:35.605934 kubelet[2237]: I0905 23:55:35.605782 2237 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:55:35.606089 kubelet[2237]: E0905 23:55:35.605952 2237 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:35.612809 kubelet[2237]: E0905 23:55:35.612766 2237 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:55:35.612809 kubelet[2237]: I0905 23:55:35.612806 2237 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:55:35.616314 kubelet[2237]: I0905 23:55:35.616289 2237 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:55:35.617272 kubelet[2237]: I0905 23:55:35.617239 2237 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 23:55:35.617431 kubelet[2237]: I0905 23:55:35.617405 2237 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:55:35.617585 kubelet[2237]: I0905 23:55:35.617432 2237 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 5 23:55:35.617723 kubelet[2237]: I0905 23:55:35.617712 2237 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:55:35.617723 kubelet[2237]: I0905 23:55:35.617723 2237 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 23:55:35.617971 kubelet[2237]: I0905 23:55:35.617945 2237 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:55:35.620064 kubelet[2237]: I0905 23:55:35.620033 2237 kubelet.go:408] "Attempting to sync node with API server" Sep 5 23:55:35.620111 kubelet[2237]: I0905 23:55:35.620066 2237 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:55:35.620111 kubelet[2237]: I0905 23:55:35.620094 2237 kubelet.go:314] "Adding apiserver pod source" Sep 5 23:55:35.620870 kubelet[2237]: I0905 23:55:35.620173 2237 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:55:35.625287 kubelet[2237]: I0905 23:55:35.624828 2237 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:55:35.625287 kubelet[2237]: W0905 23:55:35.625107 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Sep 5 23:55:35.625287 kubelet[2237]: E0905 23:55:35.625228 2237 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:35.625414 kubelet[2237]: W0905 23:55:35.625288 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Sep 5 23:55:35.625414 kubelet[2237]: E0905 23:55:35.625335 2237 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:35.625645 kubelet[2237]: I0905 23:55:35.625631 2237 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:55:35.625861 kubelet[2237]: W0905 23:55:35.625841 2237 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 23:55:35.627485 kubelet[2237]: I0905 23:55:35.627468 2237 server.go:1274] "Started kubelet" Sep 5 23:55:35.628244 kubelet[2237]: I0905 23:55:35.628207 2237 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:55:35.631684 kubelet[2237]: I0905 23:55:35.629816 2237 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:55:35.631684 kubelet[2237]: I0905 23:55:35.630592 2237 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:55:35.631684 kubelet[2237]: I0905 23:55:35.631009 2237 server.go:449] "Adding debug handlers to kubelet server" Sep 5 23:55:35.631684 kubelet[2237]: I0905 23:55:35.631584 2237 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:55:35.632327 kubelet[2237]: I0905 23:55:35.632301 2237 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:55:35.632433 kubelet[2237]: I0905 23:55:35.632418 2237 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 23:55:35.632667 kubelet[2237]: I0905 23:55:35.632646 2237 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 23:55:35.632741 kubelet[2237]: I0905 23:55:35.632727 2237 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:55:35.633084 kubelet[2237]: E0905 23:55:35.633061 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:35.633952 kubelet[2237]: W0905 23:55:35.633687 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Sep 5 23:55:35.633952 kubelet[2237]: E0905 23:55:35.633739 2237 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial 
tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:35.633952 kubelet[2237]: E0905 23:55:35.633791 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms" Sep 5 23:55:35.635043 kubelet[2237]: E0905 23:55:35.634101 2237 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1862882df709f6fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 23:55:35.627446012 +0000 UTC m=+0.964969961,LastTimestamp:2025-09-05 23:55:35.627446012 +0000 UTC m=+0.964969961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 23:55:35.635389 kubelet[2237]: E0905 23:55:35.635345 2237 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:55:35.635685 kubelet[2237]: I0905 23:55:35.635667 2237 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:55:35.635685 kubelet[2237]: I0905 23:55:35.635683 2237 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:55:35.635791 kubelet[2237]: I0905 23:55:35.635748 2237 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:55:35.646107 kubelet[2237]: I0905 23:55:35.645988 2237 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:55:35.647023 kubelet[2237]: I0905 23:55:35.646992 2237 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 23:55:35.647023 kubelet[2237]: I0905 23:55:35.647018 2237 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 23:55:35.647117 kubelet[2237]: I0905 23:55:35.647037 2237 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 23:55:35.647117 kubelet[2237]: E0905 23:55:35.647081 2237 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:55:35.653269 kubelet[2237]: W0905 23:55:35.653245 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Sep 5 23:55:35.653350 kubelet[2237]: E0905 23:55:35.653290 2237 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:35.653376 kubelet[2237]: I0905 23:55:35.653367 2237 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 23:55:35.653398 kubelet[2237]: I0905 23:55:35.653376 2237 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 23:55:35.653398 kubelet[2237]: I0905 23:55:35.653393 2237 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:55:35.654919 kubelet[2237]: I0905 23:55:35.654881 2237 policy_none.go:49] "None policy: Start" Sep 5 23:55:35.655400 kubelet[2237]: I0905 23:55:35.655389 2237 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 23:55:35.655444 kubelet[2237]: I0905 23:55:35.655409 2237 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:55:35.663374 kubelet[2237]: I0905 23:55:35.663337 2237 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:55:35.663564 kubelet[2237]: I0905 23:55:35.663547 2237 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:55:35.663627 kubelet[2237]: I0905 23:55:35.663565 2237 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:55:35.664228 kubelet[2237]: I0905 23:55:35.664208 2237 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:55:35.665387 kubelet[2237]: E0905 23:55:35.665366 2237 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 23:55:35.766035 kubelet[2237]: I0905 23:55:35.766003 2237 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 23:55:35.766554 kubelet[2237]: E0905 23:55:35.766514 2237 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Sep 5 23:55:35.834413 kubelet[2237]: E0905 23:55:35.834308 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms" Sep 5 23:55:35.934803 kubelet[2237]: I0905 23:55:35.934645 2237 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:55:35.934803 kubelet[2237]: I0905 23:55:35.934692 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7db1ccdff92369fd3799e1612a2d83dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7db1ccdff92369fd3799e1612a2d83dd\") " pod="kube-system/kube-apiserver-localhost" Sep 5 23:55:35.934803 kubelet[2237]: I0905 23:55:35.934718 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:55:35.934803 kubelet[2237]: I0905 23:55:35.934745 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:55:35.934803 kubelet[2237]: I0905 23:55:35.934762 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:55:35.935037 kubelet[2237]: I0905 23:55:35.934778 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:55:35.935037 kubelet[2237]: I0905 23:55:35.934821 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 23:55:35.935037 kubelet[2237]: I0905 23:55:35.934856 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7db1ccdff92369fd3799e1612a2d83dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7db1ccdff92369fd3799e1612a2d83dd\") " pod="kube-system/kube-apiserver-localhost" Sep 5 23:55:35.935037 kubelet[2237]: I0905 23:55:35.934874 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7db1ccdff92369fd3799e1612a2d83dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7db1ccdff92369fd3799e1612a2d83dd\") " pod="kube-system/kube-apiserver-localhost" Sep 5 23:55:35.967840 kubelet[2237]: I0905 23:55:35.967807 
2237 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 23:55:35.968193 kubelet[2237]: E0905 23:55:35.968150 2237 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Sep 5 23:55:36.052814 kubelet[2237]: E0905 23:55:36.052775 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:36.053625 containerd[1538]: time="2025-09-05T23:55:36.053415132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 5 23:55:36.053938 kubelet[2237]: E0905 23:55:36.053689 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:36.053995 containerd[1538]: time="2025-09-05T23:55:36.053958252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7db1ccdff92369fd3799e1612a2d83dd,Namespace:kube-system,Attempt:0,}" Sep 5 23:55:36.055175 kubelet[2237]: E0905 23:55:36.055144 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:36.055571 containerd[1538]: time="2025-09-05T23:55:36.055428292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 5 23:55:36.235213 kubelet[2237]: E0905 23:55:36.235077 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms" Sep 5 23:55:36.369542 kubelet[2237]: I0905 23:55:36.369461 2237 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 23:55:36.369791 kubelet[2237]: E0905 23:55:36.369768 2237 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Sep 5 23:55:36.572173 kubelet[2237]: W0905 23:55:36.572028 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Sep 5 23:55:36.572173 kubelet[2237]: E0905 23:55:36.572102 2237 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:36.671589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4285623905.mount: Deactivated successfully. 
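Every client-go call above is refused because nothing listens on 10.0.0.59:6443 yet: the kube-apiserver static pod sandbox is only now being created, so the kubelet keeps retrying node registration and lease creation with backoff. A sketch of the same reachability probe:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The TCP endpoint all the refused API calls in the log target.
	conn, err := net.DialTimeout("tcp", "10.0.0.59:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
```

The "Nameserver limits exceeded" warnings are separate: the host resolv.conf lists more nameservers than Kubernetes propagates (at most three), so the kubelet keeps 1.1.1.1, 1.0.0.1, and 8.8.8.8 and drops the rest.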
Sep 5 23:55:36.676951 containerd[1538]: time="2025-09-05T23:55:36.676895972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:55:36.679185 containerd[1538]: time="2025-09-05T23:55:36.679136412Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:55:36.679858 containerd[1538]: time="2025-09-05T23:55:36.679819572Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 5 23:55:36.680356 containerd[1538]: time="2025-09-05T23:55:36.680325652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:55:36.681213 containerd[1538]: time="2025-09-05T23:55:36.681178852Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:55:36.682438 containerd[1538]: time="2025-09-05T23:55:36.682401812Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:55:36.683043 containerd[1538]: time="2025-09-05T23:55:36.683009972Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:55:36.686971 containerd[1538]: time="2025-09-05T23:55:36.686912212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:55:36.687902 containerd[1538]: time="2025-09-05T23:55:36.687863292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 633.8346ms" Sep 5 23:55:36.690887 containerd[1538]: time="2025-09-05T23:55:36.690849772Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 635.3666ms" Sep 5 23:55:36.691857 containerd[1538]: time="2025-09-05T23:55:36.691792052Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 638.2916ms" Sep 5 23:55:36.729976 kubelet[2237]: W0905 23:55:36.729894 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Sep 5 23:55:36.730069 kubelet[2237]: E0905 
23:55:36.729987 2237 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:36.802988 containerd[1538]: time="2025-09-05T23:55:36.802871692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:36.803199 containerd[1538]: time="2025-09-05T23:55:36.803017412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:36.803199 containerd[1538]: time="2025-09-05T23:55:36.803060692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:36.803444 containerd[1538]: time="2025-09-05T23:55:36.803361052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:36.803533 containerd[1538]: time="2025-09-05T23:55:36.803463732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:36.803533 containerd[1538]: time="2025-09-05T23:55:36.803522652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:36.803598 containerd[1538]: time="2025-09-05T23:55:36.803541412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:36.803672 containerd[1538]: time="2025-09-05T23:55:36.803630492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:36.803758 containerd[1538]: time="2025-09-05T23:55:36.803653212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:36.804177 containerd[1538]: time="2025-09-05T23:55:36.804060772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:36.804177 containerd[1538]: time="2025-09-05T23:55:36.804031452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:36.804328 containerd[1538]: time="2025-09-05T23:55:36.804208292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:36.862017 containerd[1538]: time="2025-09-05T23:55:36.861299172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7db1ccdff92369fd3799e1612a2d83dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2157a024e7f78d87f36d6215d79ce00c560590112da2ddf11e31e4e76c8efb34\"" Sep 5 23:55:36.864314 containerd[1538]: time="2025-09-05T23:55:36.864090532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee30ffba0cad08aba3939ec944ea20880c6a751b1871cd408d48441fddfca17d\"" Sep 5 23:55:36.864403 kubelet[2237]: E0905 23:55:36.864215 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:36.865924 kubelet[2237]: E0905 23:55:36.865727 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:36.867553 containerd[1538]: time="2025-09-05T23:55:36.867522172Z" level=info msg="CreateContainer within sandbox \"ee30ffba0cad08aba3939ec944ea20880c6a751b1871cd408d48441fddfca17d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 23:55:36.867857 containerd[1538]: time="2025-09-05T23:55:36.867833332Z" level=info msg="CreateContainer within sandbox \"2157a024e7f78d87f36d6215d79ce00c560590112da2ddf11e31e4e76c8efb34\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 23:55:36.868525 containerd[1538]: time="2025-09-05T23:55:36.868497532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1faea41b47ac253331bc37c8f964c81b38a2afe37a4260dd4a7939d24b7e79e\"" Sep 5 23:55:36.869704 kubelet[2237]: E0905 23:55:36.869475 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:36.871810 containerd[1538]: time="2025-09-05T23:55:36.871778132Z" level=info msg="CreateContainer within sandbox \"f1faea41b47ac253331bc37c8f964c81b38a2afe37a4260dd4a7939d24b7e79e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 23:55:36.878553 kubelet[2237]: W0905 23:55:36.878490 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Sep 5 23:55:36.878696 kubelet[2237]: E0905 23:55:36.878665 2237 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:36.882060 containerd[1538]: time="2025-09-05T23:55:36.882025092Z" level=info msg="CreateContainer within sandbox \"ee30ffba0cad08aba3939ec944ea20880c6a751b1871cd408d48441fddfca17d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"527419271c00fbec9c5e847c71026745b3c0ebeb6c6dabbdcf2aadbe573a04f0\"" Sep 5 23:55:36.882981 containerd[1538]: time="2025-09-05T23:55:36.882929212Z" level=info msg="StartContainer for \"527419271c00fbec9c5e847c71026745b3c0ebeb6c6dabbdcf2aadbe573a04f0\"" Sep 5 23:55:36.886626 containerd[1538]: time="2025-09-05T23:55:36.886549772Z" level=info msg="CreateContainer within sandbox \"2157a024e7f78d87f36d6215d79ce00c560590112da2ddf11e31e4e76c8efb34\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8e58c6ae00320c4adc9c3a6587b76378e475a83ba418ee0aa55713533866c080\"" Sep 5 23:55:36.888382 containerd[1538]: time="2025-09-05T23:55:36.886919332Z" level=info msg="CreateContainer within sandbox \"f1faea41b47ac253331bc37c8f964c81b38a2afe37a4260dd4a7939d24b7e79e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"96f66b68b84e29de8221a8a7bb79188a5b146febaaf48295ba32bfb256781e8e\"" Sep 5 23:55:36.888789 containerd[1538]: time="2025-09-05T23:55:36.888755052Z" level=info msg="StartContainer for \"96f66b68b84e29de8221a8a7bb79188a5b146febaaf48295ba32bfb256781e8e\"" Sep 5 23:55:36.888932 containerd[1538]: time="2025-09-05T23:55:36.888755212Z" level=info msg="StartContainer for \"8e58c6ae00320c4adc9c3a6587b76378e475a83ba418ee0aa55713533866c080\"" Sep 5 23:55:36.949222 containerd[1538]: time="2025-09-05T23:55:36.949161052Z" level=info msg="StartContainer for \"8e58c6ae00320c4adc9c3a6587b76378e475a83ba418ee0aa55713533866c080\" returns successfully" Sep 5 23:55:36.949326 containerd[1538]: time="2025-09-05T23:55:36.949174452Z" level=info msg="StartContainer for \"527419271c00fbec9c5e847c71026745b3c0ebeb6c6dabbdcf2aadbe573a04f0\" returns successfully" Sep 5 23:55:36.949422 containerd[1538]: time="2025-09-05T23:55:36.949178132Z" level=info msg="StartContainer for \"96f66b68b84e29de8221a8a7bb79188a5b146febaaf48295ba32bfb256781e8e\" returns successfully" Sep 5 23:55:37.172100 kubelet[2237]: I0905 23:55:37.171541 2237 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 23:55:37.662458 kubelet[2237]: E0905 23:55:37.662336 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:37.666287 kubelet[2237]: E0905 23:55:37.665366 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:37.672327 kubelet[2237]: E0905 23:55:37.672305 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:38.346013 kubelet[2237]: E0905 23:55:38.345550 2237 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 23:55:38.369727 kubelet[2237]: I0905 23:55:38.369513 2237 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 23:55:38.369727 kubelet[2237]: E0905 23:55:38.369564 2237 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 5 23:55:38.398038 kubelet[2237]: E0905 23:55:38.397995 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:38.498284 kubelet[2237]: E0905 23:55:38.498226 2237 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:38.599217 kubelet[2237]: E0905 23:55:38.598893 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:38.674910 kubelet[2237]: E0905 23:55:38.674598 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:38.699347 kubelet[2237]: E0905 23:55:38.699306 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:38.799954 kubelet[2237]: E0905 23:55:38.799907 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:38.900768 kubelet[2237]: E0905 23:55:38.900661 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:39.001191 kubelet[2237]: E0905 23:55:39.001145 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:39.101885 kubelet[2237]: E0905 23:55:39.101837 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:39.202734 kubelet[2237]: E0905 23:55:39.202631 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:39.303134 kubelet[2237]: E0905 23:55:39.303089 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:39.337076 kubelet[2237]: E0905 23:55:39.336950 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:39.403943 kubelet[2237]: E0905 23:55:39.403909 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:39.505478 kubelet[2237]: E0905 23:55:39.505374 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:39.606235 kubelet[2237]: E0905 23:55:39.606188 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:39.706674 kubelet[2237]: E0905 23:55:39.706619 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:40.599090 systemd[1]: Reloading requested from client PID 2516 ('systemctl') (unit session-7.scope)... Sep 5 23:55:40.599108 systemd[1]: Reloading... Sep 5 23:55:40.626163 kubelet[2237]: I0905 23:55:40.626072 2237 apiserver.go:52] "Watching apiserver" Sep 5 23:55:40.634404 kubelet[2237]: I0905 23:55:40.633614 2237 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 23:55:40.669014 zram_generator::config[2558]: No configuration found. Sep 5 23:55:40.765746 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:55:40.825190 systemd[1]: Reloading finished in 225 ms. Sep 5 23:55:40.854727 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
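The reload at 23:55:40 stops kubelet 2237 and, below, starts a new instance (PID 2607) that immediately warns that --container-runtime-endpoint and --volume-plugin-dir are deprecated and should be set via the file passed to --config. A sketch of the config-file equivalents, assuming the kubelet.config.k8s.io/v1beta1 field names (containerRuntimeEndpoint, volumePluginDir); the socket path is an assumption, while the volume plugin directory matches the FlexVolume path probed later in this log:

```go
// kubeletcfg.go: emits the config-file form of the two deprecated flags.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"apiVersion": "kubelet.config.k8s.io/v1beta1",
		"kind":       "KubeletConfiguration",
		// Assumed socket path; the log does not show the flag's actual value.
		"containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
		// Matches the FlexVolume directory probed at 23:55:58 below.
		"volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```

The kubelet accepts JSON here because its config loader parses YAML, of which JSON is a subset.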
Sep 5 23:55:40.878953 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:55:40.879448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:55:40.890261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:55:40.985731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:55:40.989683 (kubelet)[2607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:55:41.032692 kubelet[2607]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:55:41.032692 kubelet[2607]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 23:55:41.032692 kubelet[2607]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:55:41.033079 kubelet[2607]: I0905 23:55:41.032727 2607 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:55:41.038992 kubelet[2607]: I0905 23:55:41.037795 2607 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 23:55:41.038992 kubelet[2607]: I0905 23:55:41.037824 2607 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:55:41.038992 kubelet[2607]: I0905 23:55:41.038070 2607 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 23:55:41.039441 kubelet[2607]: I0905 23:55:41.039425 2607 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 23:55:41.041696 kubelet[2607]: I0905 23:55:41.041324 2607 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:55:41.046208 kubelet[2607]: E0905 23:55:41.046173 2607 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:55:41.046208 kubelet[2607]: I0905 23:55:41.046207 2607 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:55:41.048495 kubelet[2607]: I0905 23:55:41.048470 2607 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:55:41.048781 kubelet[2607]: I0905 23:55:41.048761 2607 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 23:55:41.048887 kubelet[2607]: I0905 23:55:41.048855 2607 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:55:41.049065 kubelet[2607]: I0905 23:55:41.048883 2607 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 5 23:55:41.049137 kubelet[2607]: I0905 23:55:41.049079 2607 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:55:41.049137 kubelet[2607]: I0905 23:55:41.049089 2607 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 23:55:41.049137 kubelet[2607]: I0905 23:55:41.049124 2607 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:55:41.049231 kubelet[2607]: I0905 23:55:41.049218 2607 kubelet.go:408] "Attempting to sync node with API server" Sep 5 23:55:41.049264 kubelet[2607]: I0905 23:55:41.049237 2607 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:55:41.049671 kubelet[2607]: I0905 23:55:41.049656 2607 kubelet.go:314] "Adding apiserver pod source" Sep 5 23:55:41.049698 kubelet[2607]: I0905 23:55:41.049678 2607 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:55:41.050813 kubelet[2607]: I0905 23:55:41.050768 2607 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:55:41.051422 kubelet[2607]: I0905 23:55:41.051387 2607 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:55:41.052070 kubelet[2607]: I0905 23:55:41.051861 2607 server.go:1274] "Started kubelet" Sep 5 23:55:41.054976 kubelet[2607]: I0905 23:55:41.052625 2607 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:55:41.057443 
kubelet[2607]: I0905 23:55:41.057420 2607 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:55:41.057537 kubelet[2607]: I0905 23:55:41.054693 2607 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:55:41.057758 kubelet[2607]: I0905 23:55:41.052804 2607 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:55:41.059146 kubelet[2607]: I0905 23:55:41.059115 2607 server.go:449] "Adding debug handlers to kubelet server" Sep 5 23:55:41.060030 kubelet[2607]: I0905 23:55:41.054789 2607 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:55:41.069701 kubelet[2607]: I0905 23:55:41.069271 2607 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 23:55:41.071111 kubelet[2607]: E0905 23:55:41.070424 2607 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:55:41.071416 kubelet[2607]: I0905 23:55:41.071267 2607 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 23:55:41.073516 kubelet[2607]: E0905 23:55:41.071587 2607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:55:41.073516 kubelet[2607]: I0905 23:55:41.072656 2607 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:55:41.082207 kubelet[2607]: I0905 23:55:41.082162 2607 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:55:41.083004 kubelet[2607]: I0905 23:55:41.082939 2607 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:55:41.083355 kubelet[2607]: I0905 23:55:41.083332 2607 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:55:41.083355 kubelet[2607]: I0905 23:55:41.083349 2607 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:55:41.084650 kubelet[2607]: I0905 23:55:41.084341 2607 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 23:55:41.084650 kubelet[2607]: I0905 23:55:41.084366 2607 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 23:55:41.084650 kubelet[2607]: I0905 23:55:41.084385 2607 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 23:55:41.084650 kubelet[2607]: E0905 23:55:41.084427 2607 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:55:41.126699 kubelet[2607]: I0905 23:55:41.126607 2607 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 23:55:41.126699 kubelet[2607]: I0905 23:55:41.126627 2607 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 23:55:41.126699 kubelet[2607]: I0905 23:55:41.126649 2607 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:55:41.126814 kubelet[2607]: I0905 23:55:41.126791 2607 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 23:55:41.126892 kubelet[2607]: I0905 23:55:41.126802 2607 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 23:55:41.126892 kubelet[2607]: I0905 23:55:41.126830 2607 policy_none.go:49] "None policy: Start" Sep 5 23:55:41.127519 kubelet[2607]: I0905 23:55:41.127499 2607 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 23:55:41.127576 kubelet[2607]: I0905 23:55:41.127524 2607 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:55:41.127672 kubelet[2607]: I0905 23:55:41.127659 2607 state_mem.go:75] "Updated machine memory state" Sep 5 23:55:41.129501 kubelet[2607]: I0905 23:55:41.128809 2607 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:55:41.129501 kubelet[2607]: I0905 23:55:41.129112 2607 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:55:41.129501 kubelet[2607]: I0905 23:55:41.129125 2607 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:55:41.129501 kubelet[2607]: I0905 23:55:41.129478 2607 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:55:41.234320 kubelet[2607]: I0905 23:55:41.234288 2607 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 23:55:41.242607 kubelet[2607]: I0905 23:55:41.242569 2607 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 5 23:55:41.242915 kubelet[2607]: I0905 23:55:41.242903 2607 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 23:55:41.274499 kubelet[2607]: I0905 23:55:41.274462 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7db1ccdff92369fd3799e1612a2d83dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7db1ccdff92369fd3799e1612a2d83dd\") " pod="kube-system/kube-apiserver-localhost" Sep 5 23:55:41.274499 kubelet[2607]: I0905 23:55:41.274501 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:55:41.274499 kubelet[2607]: I0905 23:55:41.274539 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:55:41.274499 kubelet[2607]: I0905 23:55:41.274561 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:55:41.274751 kubelet[2607]: I0905 23:55:41.274581 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7db1ccdff92369fd3799e1612a2d83dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7db1ccdff92369fd3799e1612a2d83dd\") " pod="kube-system/kube-apiserver-localhost" Sep 5 23:55:41.274751 kubelet[2607]: I0905 23:55:41.274630 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7db1ccdff92369fd3799e1612a2d83dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7db1ccdff92369fd3799e1612a2d83dd\") " pod="kube-system/kube-apiserver-localhost" Sep 5 23:55:41.274751 kubelet[2607]: I0905 23:55:41.274645 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:55:41.274751 kubelet[2607]: I0905 23:55:41.274660 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 23:55:41.274751 kubelet[2607]: I0905 23:55:41.274700 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 23:55:41.493004 kubelet[2607]: E0905 23:55:41.492925 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:41.493139 kubelet[2607]: E0905 23:55:41.493118 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:41.493139 kubelet[2607]: E0905 23:55:41.493130 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:42.050507 kubelet[2607]: I0905 23:55:42.050461 2607 apiserver.go:52] "Watching apiserver" Sep 5 23:55:42.071561 kubelet[2607]: I0905 23:55:42.071526 2607 desired_state_of_world_populator.go:155] "Finished populating 
initial desired state of world" Sep 5 23:55:42.100412 kubelet[2607]: E0905 23:55:42.099855 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:42.101324 kubelet[2607]: E0905 23:55:42.101299 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:42.106933 kubelet[2607]: E0905 23:55:42.105831 2607 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 23:55:42.109987 kubelet[2607]: E0905 23:55:42.107209 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:42.144543 kubelet[2607]: I0905 23:55:42.144468 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.144449372 podStartE2EDuration="1.144449372s" podCreationTimestamp="2025-09-05 23:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:55:42.143212052 +0000 UTC m=+1.150451361" watchObservedRunningTime="2025-09-05 23:55:42.144449372 +0000 UTC m=+1.151688641" Sep 5 23:55:42.144681 kubelet[2607]: I0905 23:55:42.144600 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.144596172 podStartE2EDuration="1.144596172s" podCreationTimestamp="2025-09-05 23:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:55:42.126687132 +0000 UTC m=+1.133926441" watchObservedRunningTime="2025-09-05 23:55:42.144596172 +0000 UTC m=+1.151835481" Sep 5 23:55:42.150984 kubelet[2607]: I0905 23:55:42.150873 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.150857532 podStartE2EDuration="1.150857532s" podCreationTimestamp="2025-09-05 23:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:55:42.150838212 +0000 UTC m=+1.158077521" watchObservedRunningTime="2025-09-05 23:55:42.150857532 +0000 UTC m=+1.158096841" Sep 5 23:55:43.101648 kubelet[2607]: E0905 23:55:43.101496 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:43.101648 kubelet[2607]: E0905 23:55:43.101587 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:45.154926 kubelet[2607]: I0905 23:55:45.154891 2607 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 23:55:45.155335 containerd[1538]: time="2025-09-05T23:55:45.155203394Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
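The kubelet has just received PodCIDR 192.168.0.0/24 and passed it to containerd, which waits for a CNI provider (Calico, installed below via tigera-operator) to drop a config. A small sketch of the bookkeeping an IPAM allocator performs on that CIDR, using only the value from the log:

```go
// cidrcheck.go: IPAM-style bookkeeping on the PodCIDR from the entries above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("192.168.0.0/24") // value from the log
	usable := (1 << (32 - prefix.Bits())) - 2         // minus network/broadcast
	fmt.Printf("network %s: %d usable pod addresses\n", prefix.Masked(), usable)

	// First few addresses a host-local-style allocator would hand out.
	addr := prefix.Addr().Next() // skip the network address itself
	for i := 0; i < 3; i++ {
		fmt.Println("candidate pod IP:", addr)
		addr = addr.Next()
	}
}
```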
Sep 5 23:55:45.155529 kubelet[2607]: I0905 23:55:45.155379 2607 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 23:55:46.105298 kubelet[2607]: I0905 23:55:46.105246 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f0cff83-9c9d-47c3-8510-12f3989f7e7c-xtables-lock\") pod \"kube-proxy-gsqm6\" (UID: \"2f0cff83-9c9d-47c3-8510-12f3989f7e7c\") " pod="kube-system/kube-proxy-gsqm6" Sep 5 23:55:46.105298 kubelet[2607]: I0905 23:55:46.105299 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txj2\" (UniqueName: \"kubernetes.io/projected/2f0cff83-9c9d-47c3-8510-12f3989f7e7c-kube-api-access-8txj2\") pod \"kube-proxy-gsqm6\" (UID: \"2f0cff83-9c9d-47c3-8510-12f3989f7e7c\") " pod="kube-system/kube-proxy-gsqm6" Sep 5 23:55:46.105432 kubelet[2607]: I0905 23:55:46.105324 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f0cff83-9c9d-47c3-8510-12f3989f7e7c-kube-proxy\") pod \"kube-proxy-gsqm6\" (UID: \"2f0cff83-9c9d-47c3-8510-12f3989f7e7c\") " pod="kube-system/kube-proxy-gsqm6" Sep 5 23:55:46.105432 kubelet[2607]: I0905 23:55:46.105341 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f0cff83-9c9d-47c3-8510-12f3989f7e7c-lib-modules\") pod \"kube-proxy-gsqm6\" (UID: \"2f0cff83-9c9d-47c3-8510-12f3989f7e7c\") " pod="kube-system/kube-proxy-gsqm6" Sep 5 23:55:46.396605 kubelet[2607]: E0905 23:55:46.396429 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:46.397842 containerd[1538]: time="2025-09-05T23:55:46.397782639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gsqm6,Uid:2f0cff83-9c9d-47c3-8510-12f3989f7e7c,Namespace:kube-system,Attempt:0,}" Sep 5 23:55:46.442869 containerd[1538]: time="2025-09-05T23:55:46.442614448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:46.442869 containerd[1538]: time="2025-09-05T23:55:46.442688408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:46.442869 containerd[1538]: time="2025-09-05T23:55:46.442704048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:46.442869 containerd[1538]: time="2025-09-05T23:55:46.442787769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:46.474366 containerd[1538]: time="2025-09-05T23:55:46.474324932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gsqm6,Uid:2f0cff83-9c9d-47c3-8510-12f3989f7e7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d26ee42014d5178c6f35cddde13c6f9917c502a1b43d2e70dd30e25b259a8bca\"" Sep 5 23:55:46.475589 kubelet[2607]: E0905 23:55:46.475124 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:46.477246 containerd[1538]: time="2025-09-05T23:55:46.477202151Z" level=info msg="CreateContainer within sandbox \"d26ee42014d5178c6f35cddde13c6f9917c502a1b43d2e70dd30e25b259a8bca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 23:55:46.493949 containerd[1538]: time="2025-09-05T23:55:46.493904858Z" level=info msg="CreateContainer within sandbox \"d26ee42014d5178c6f35cddde13c6f9917c502a1b43d2e70dd30e25b259a8bca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"784362630dbd2201136b0939bd308d0867512fa174a592278ca5de9267f4b19d\"" Sep 5 23:55:46.494497 containerd[1538]: time="2025-09-05T23:55:46.494420141Z" level=info msg="StartContainer for \"784362630dbd2201136b0939bd308d0867512fa174a592278ca5de9267f4b19d\"" Sep 5 23:55:46.509251 kubelet[2607]: I0905 23:55:46.509133 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z26vh\" (UniqueName: \"kubernetes.io/projected/b4aa68f2-1e0e-4087-9039-6bb662e81b31-kube-api-access-z26vh\") pod \"tigera-operator-58fc44c59b-bskqr\" (UID: \"b4aa68f2-1e0e-4087-9039-6bb662e81b31\") " pod="tigera-operator/tigera-operator-58fc44c59b-bskqr" Sep 5 23:55:46.509251 kubelet[2607]: I0905 23:55:46.509174 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b4aa68f2-1e0e-4087-9039-6bb662e81b31-var-lib-calico\") pod \"tigera-operator-58fc44c59b-bskqr\" (UID: \"b4aa68f2-1e0e-4087-9039-6bb662e81b31\") " pod="tigera-operator/tigera-operator-58fc44c59b-bskqr" Sep 5 23:55:46.550921 containerd[1538]: time="2025-09-05T23:55:46.550885105Z" level=info msg="StartContainer for \"784362630dbd2201136b0939bd308d0867512fa174a592278ca5de9267f4b19d\" returns successfully" Sep 5 23:55:46.719660 containerd[1538]: time="2025-09-05T23:55:46.719501071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-bskqr,Uid:b4aa68f2-1e0e-4087-9039-6bb662e81b31,Namespace:tigera-operator,Attempt:0,}" Sep 5 23:55:46.739787 containerd[1538]: time="2025-09-05T23:55:46.739693841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:46.739787 containerd[1538]: time="2025-09-05T23:55:46.739741121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:46.739787 containerd[1538]: time="2025-09-05T23:55:46.739751921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:46.740059 containerd[1538]: time="2025-09-05T23:55:46.739830882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:46.782839 containerd[1538]: time="2025-09-05T23:55:46.782740838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-bskqr,Uid:b4aa68f2-1e0e-4087-9039-6bb662e81b31,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2a2c633cf9d5e2822f896c5561585385623c9bacb3b22807b5ccc0145219058d\"" Sep 5 23:55:46.786571 containerd[1538]: time="2025-09-05T23:55:46.786469542Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 23:55:47.113533 kubelet[2607]: E0905 23:55:47.111108 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:47.225499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount30510621.mount: Deactivated successfully. Sep 5 23:55:47.682258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1057972430.mount: Deactivated successfully. Sep 5 23:55:47.704561 kubelet[2607]: E0905 23:55:47.704459 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:47.728737 kubelet[2607]: I0905 23:55:47.728680 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gsqm6" podStartSLOduration=1.728663957 podStartE2EDuration="1.728663957s" podCreationTimestamp="2025-09-05 23:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:55:47.123467143 +0000 UTC m=+6.130706452" watchObservedRunningTime="2025-09-05 23:55:47.728663957 +0000 UTC m=+6.735903266" Sep 5 23:55:48.116229 kubelet[2607]: E0905 23:55:48.116110 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:48.230015 containerd[1538]: time="2025-09-05T23:55:48.229741616Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:48.238397 containerd[1538]: time="2025-09-05T23:55:48.238355465Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 5 23:55:48.239833 containerd[1538]: time="2025-09-05T23:55:48.239774633Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:48.242925 containerd[1538]: time="2025-09-05T23:55:48.242877010Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:48.243611 containerd[1538]: time="2025-09-05T23:55:48.243570614Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.456998671s" Sep 5 23:55:48.243611 containerd[1538]: time="2025-09-05T23:55:48.243606135Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns 
image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 5 23:55:48.264861 containerd[1538]: time="2025-09-05T23:55:48.264825175Z" level=info msg="CreateContainer within sandbox \"2a2c633cf9d5e2822f896c5561585385623c9bacb3b22807b5ccc0145219058d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 23:55:48.283566 containerd[1538]: time="2025-09-05T23:55:48.283518001Z" level=info msg="CreateContainer within sandbox \"2a2c633cf9d5e2822f896c5561585385623c9bacb3b22807b5ccc0145219058d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a459741436233dc90931f08d580014d88da038059e000a740cb327bf29f0f895\"" Sep 5 23:55:48.284106 containerd[1538]: time="2025-09-05T23:55:48.284082684Z" level=info msg="StartContainer for \"a459741436233dc90931f08d580014d88da038059e000a740cb327bf29f0f895\"" Sep 5 23:55:48.338570 containerd[1538]: time="2025-09-05T23:55:48.338531592Z" level=info msg="StartContainer for \"a459741436233dc90931f08d580014d88da038059e000a740cb327bf29f0f895\" returns successfully" Sep 5 23:55:48.545263 kubelet[2607]: E0905 23:55:48.545226 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:49.119861 kubelet[2607]: E0905 23:55:49.119818 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:53.077877 kubelet[2607]: E0905 23:55:53.077824 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:53.100696 kubelet[2607]: I0905 23:55:53.100506 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-bskqr" podStartSLOduration=5.6378229619999995 podStartE2EDuration="7.100491667s" podCreationTimestamp="2025-09-05 23:55:46 +0000 UTC" firstStartedPulling="2025-09-05 23:55:46.784829652 +0000 UTC m=+5.792068961" lastFinishedPulling="2025-09-05 23:55:48.247498357 +0000 UTC m=+7.254737666" observedRunningTime="2025-09-05 23:55:49.144287422 +0000 UTC m=+8.151526731" watchObservedRunningTime="2025-09-05 23:55:53.100491667 +0000 UTC m=+12.107730936" Sep 5 23:55:53.702643 sudo[1737]: pam_unix(sudo:session): session closed for user root Sep 5 23:55:53.705547 sshd[1730]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:53.711329 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit. Sep 5 23:55:53.712036 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:50186.service: Deactivated successfully. Sep 5 23:55:53.713681 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 23:55:53.715496 systemd-logind[1510]: Removed session 7. Sep 5 23:55:57.702056 update_engine[1514]: I20250905 23:55:57.701988 1514 update_attempter.cc:509] Updating boot flags... 
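The pod_startup_latency_tracker entry for tigera-operator above reports podStartE2EDuration="7.100491667s" but podStartSLOduration of roughly 5.6378s; the difference is exactly the image-pull window bounded by firstStartedPulling and lastFinishedPulling (containerd's own "Pulled image ... in 1.456998671s" sits inside that window). A sketch reproducing the arithmetic from the logged timestamps:

```go
// pulltime.go: reproduces the tracker arithmetic from the logged timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps printed in the tracker entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	started, _ := time.Parse(layout, "2025-09-05 23:55:46.784829652 +0000 UTC")
	finished, _ := time.Parse(layout, "2025-09-05 23:55:48.247498357 +0000 UTC")

	pull := finished.Sub(started)        // image-pull window
	e2e := 7100491667 * time.Nanosecond  // podStartE2EDuration from the log
	fmt.Println("pull window:", pull)    // ~1.462668705s
	fmt.Println("e2e - pull:", e2e-pull) // ~5.637822962s = podStartSLOduration
}
```

This prints a pull window of about 1.4627s and an SLO duration of about 5.6378s, matching the tracker entry above.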
Sep 5 23:55:57.745147 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3015) Sep 5 23:55:57.779432 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3013) Sep 5 23:55:58.087355 kubelet[2607]: I0905 23:55:58.087227 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/563e0b94-a38e-453e-a09a-dedd19ba5e8d-tigera-ca-bundle\") pod \"calico-typha-6c76b6d88d-l7lg8\" (UID: \"563e0b94-a38e-453e-a09a-dedd19ba5e8d\") " pod="calico-system/calico-typha-6c76b6d88d-l7lg8" Sep 5 23:55:58.087355 kubelet[2607]: I0905 23:55:58.087286 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/563e0b94-a38e-453e-a09a-dedd19ba5e8d-typha-certs\") pod \"calico-typha-6c76b6d88d-l7lg8\" (UID: \"563e0b94-a38e-453e-a09a-dedd19ba5e8d\") " pod="calico-system/calico-typha-6c76b6d88d-l7lg8" Sep 5 23:55:58.087355 kubelet[2607]: I0905 23:55:58.087307 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqjm9\" (UniqueName: \"kubernetes.io/projected/563e0b94-a38e-453e-a09a-dedd19ba5e8d-kube-api-access-lqjm9\") pod \"calico-typha-6c76b6d88d-l7lg8\" (UID: \"563e0b94-a38e-453e-a09a-dedd19ba5e8d\") " pod="calico-system/calico-typha-6c76b6d88d-l7lg8" Sep 5 23:55:58.266381 kubelet[2607]: E0905 23:55:58.266328 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:58.268477 containerd[1538]: time="2025-09-05T23:55:58.268438582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c76b6d88d-l7lg8,Uid:563e0b94-a38e-453e-a09a-dedd19ba5e8d,Namespace:calico-system,Attempt:0,}" Sep 5 23:55:58.330414 containerd[1538]: time="2025-09-05T23:55:58.330296646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:58.330414 containerd[1538]: time="2025-09-05T23:55:58.330351766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:58.330414 containerd[1538]: time="2025-09-05T23:55:58.330362526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:58.330575 containerd[1538]: time="2025-09-05T23:55:58.330452846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:58.379377 containerd[1538]: time="2025-09-05T23:55:58.379269151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c76b6d88d-l7lg8,Uid:563e0b94-a38e-453e-a09a-dedd19ba5e8d,Namespace:calico-system,Attempt:0,} returns sandbox id \"53c8ab06f715cddea3dd8056df24f41a80c81183fdd551d39b237dff4f70cfff\"" Sep 5 23:55:58.380862 kubelet[2607]: E0905 23:55:58.380437 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:55:58.381898 containerd[1538]: time="2025-09-05T23:55:58.381863999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 23:55:58.389149 kubelet[2607]: I0905 23:55:58.389118 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b-lib-modules\") pod \"calico-node-zmvts\" (UID: \"c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b\") " pod="calico-system/calico-node-zmvts" Sep 5 23:55:58.389215 kubelet[2607]: I0905 23:55:58.389156 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b-var-lib-calico\") pod \"calico-node-zmvts\" (UID: \"c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b\") " pod="calico-system/calico-node-zmvts" Sep 5 23:55:58.389215 kubelet[2607]: I0905 23:55:58.389177 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b-tigera-ca-bundle\") pod \"calico-node-zmvts\" (UID: \"c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b\") " pod="calico-system/calico-node-zmvts" Sep 5 23:55:58.389215 kubelet[2607]: I0905 23:55:58.389193 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b-cni-bin-dir\") pod \"calico-node-zmvts\" (UID: \"c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b\") " pod="calico-system/calico-node-zmvts" Sep 5 23:55:58.389215 kubelet[2607]: I0905 23:55:58.389211 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b-cni-log-dir\") pod \"calico-node-zmvts\" (UID: \"c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b\") " pod="calico-system/calico-node-zmvts" Sep 5 23:55:58.389378 kubelet[2607]: I0905 23:55:58.389318 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b-cni-net-dir\") pod \"calico-node-zmvts\" (UID: \"c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b\") " pod="calico-system/calico-node-zmvts" Sep 5 23:55:58.389406 kubelet[2607]: I0905 23:55:58.389391 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b-var-run-calico\") pod \"calico-node-zmvts\" (UID: \"c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b\") " pod="calico-system/calico-node-zmvts" Sep 5 23:55:58.389429 kubelet[2607]: I0905 23:55:58.389412 2607 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26lj\" (UniqueName: \"kubernetes.io/projected/c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b-kube-api-access-n26lj\") pod \"calico-node-zmvts\" (UID: \"c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b\") " pod="calico-system/calico-node-zmvts" Sep 5 23:55:58.389453 kubelet[2607]: I0905 23:55:58.389436 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b-flexvol-driver-host\") pod \"calico-node-zmvts\" (UID: \"c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b\") " pod="calico-system/calico-node-zmvts" Sep 5 23:55:58.389475 kubelet[2607]: I0905 23:55:58.389452 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b-policysync\") pod \"calico-node-zmvts\" (UID: \"c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b\") " pod="calico-system/calico-node-zmvts" Sep 5 23:55:58.389475 kubelet[2607]: I0905 23:55:58.389467 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b-xtables-lock\") pod \"calico-node-zmvts\" (UID: \"c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b\") " pod="calico-system/calico-node-zmvts" Sep 5 23:55:58.389518 kubelet[2607]: I0905 23:55:58.389484 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b-node-certs\") pod \"calico-node-zmvts\" (UID: \"c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b\") " pod="calico-system/calico-node-zmvts" Sep 5 23:55:58.463123 kubelet[2607]: E0905 23:55:58.462094 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2czxt" podUID="3040e6c7-ae59-4353-9f35-08b7bd7a921f" Sep 5 23:55:58.500502 kubelet[2607]: E0905 23:55:58.500468 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.500502 kubelet[2607]: W0905 23:55:58.500494 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.500643 kubelet[2607]: E0905 23:55:58.500519 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.501985 kubelet[2607]: E0905 23:55:58.500738 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.501985 kubelet[2607]: W0905 23:55:58.500751 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.501985 kubelet[2607]: E0905 23:55:58.500768 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.501985 kubelet[2607]: E0905 23:55:58.501080 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.501985 kubelet[2607]: W0905 23:55:58.501092 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.501985 kubelet[2607]: E0905 23:55:58.501102 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.501985 kubelet[2607]: E0905 23:55:58.501307 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.501985 kubelet[2607]: W0905 23:55:58.501316 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.501985 kubelet[2607]: E0905 23:55:58.501325 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.501985 kubelet[2607]: E0905 23:55:58.501597 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.502257 kubelet[2607]: W0905 23:55:58.501605 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.502257 kubelet[2607]: E0905 23:55:58.501625 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.502257 kubelet[2607]: E0905 23:55:58.501773 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.502257 kubelet[2607]: W0905 23:55:58.501781 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.502257 kubelet[2607]: E0905 23:55:58.501788 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.502257 kubelet[2607]: E0905 23:55:58.501921 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.502257 kubelet[2607]: W0905 23:55:58.501928 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.502257 kubelet[2607]: E0905 23:55:58.501935 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.502257 kubelet[2607]: E0905 23:55:58.502090 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.502257 kubelet[2607]: W0905 23:55:58.502097 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.502446 kubelet[2607]: E0905 23:55:58.502104 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.502446 kubelet[2607]: E0905 23:55:58.502224 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.502446 kubelet[2607]: W0905 23:55:58.502231 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.502446 kubelet[2607]: E0905 23:55:58.502247 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.502446 kubelet[2607]: E0905 23:55:58.502377 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.502446 kubelet[2607]: W0905 23:55:58.502387 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.502446 kubelet[2607]: E0905 23:55:58.502394 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.506690 kubelet[2607]: E0905 23:55:58.506215 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.506690 kubelet[2607]: W0905 23:55:58.506243 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.506690 kubelet[2607]: E0905 23:55:58.506257 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.507572 kubelet[2607]: E0905 23:55:58.507536 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.508030 kubelet[2607]: W0905 23:55:58.507935 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.508030 kubelet[2607]: E0905 23:55:58.508006 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.508911 kubelet[2607]: E0905 23:55:58.508427 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.508911 kubelet[2607]: W0905 23:55:58.508456 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.508911 kubelet[2607]: E0905 23:55:58.508471 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.509193 kubelet[2607]: E0905 23:55:58.508941 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.509193 kubelet[2607]: W0905 23:55:58.508953 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.509193 kubelet[2607]: E0905 23:55:58.508991 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.512501 kubelet[2607]: E0905 23:55:58.512479 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.512501 kubelet[2607]: W0905 23:55:58.512496 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.512501 kubelet[2607]: E0905 23:55:58.512512 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.513718 kubelet[2607]: E0905 23:55:58.513694 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.513718 kubelet[2607]: W0905 23:55:58.513714 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.513928 kubelet[2607]: E0905 23:55:58.513911 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.517304 kubelet[2607]: E0905 23:55:58.517153 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.517304 kubelet[2607]: W0905 23:55:58.517170 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.517304 kubelet[2607]: E0905 23:55:58.517187 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.518737 kubelet[2607]: E0905 23:55:58.518715 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.518849 kubelet[2607]: W0905 23:55:58.518834 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.518916 kubelet[2607]: E0905 23:55:58.518901 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.519200 kubelet[2607]: E0905 23:55:58.519186 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.519321 kubelet[2607]: W0905 23:55:58.519307 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.519375 kubelet[2607]: E0905 23:55:58.519363 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.520387 containerd[1538]: time="2025-09-05T23:55:58.520042769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zmvts,Uid:c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b,Namespace:calico-system,Attempt:0,}" Sep 5 23:55:58.520628 kubelet[2607]: E0905 23:55:58.520608 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.520770 kubelet[2607]: W0905 23:55:58.520706 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.520885 kubelet[2607]: E0905 23:55:58.520838 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.521302 kubelet[2607]: E0905 23:55:58.521218 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.521302 kubelet[2607]: W0905 23:55:58.521232 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.521302 kubelet[2607]: E0905 23:55:58.521258 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.521826 kubelet[2607]: E0905 23:55:58.521673 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.521826 kubelet[2607]: W0905 23:55:58.521687 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.521826 kubelet[2607]: E0905 23:55:58.521699 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.522549 kubelet[2607]: E0905 23:55:58.522461 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.522549 kubelet[2607]: W0905 23:55:58.522476 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.522549 kubelet[2607]: E0905 23:55:58.522488 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.549461 containerd[1538]: time="2025-09-05T23:55:58.549367056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:58.549632 containerd[1538]: time="2025-09-05T23:55:58.549428376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:58.549632 containerd[1538]: time="2025-09-05T23:55:58.549469456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:58.550165 containerd[1538]: time="2025-09-05T23:55:58.550082098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:58.592674 kubelet[2607]: E0905 23:55:58.592647 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.594623 kubelet[2607]: W0905 23:55:58.594016 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.594623 kubelet[2607]: E0905 23:55:58.594055 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.594623 kubelet[2607]: I0905 23:55:58.594095 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3040e6c7-ae59-4353-9f35-08b7bd7a921f-kubelet-dir\") pod \"csi-node-driver-2czxt\" (UID: \"3040e6c7-ae59-4353-9f35-08b7bd7a921f\") " pod="calico-system/csi-node-driver-2czxt" Sep 5 23:55:58.595487 kubelet[2607]: E0905 23:55:58.595363 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.595487 kubelet[2607]: W0905 23:55:58.595381 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.595487 kubelet[2607]: E0905 23:55:58.595405 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.595487 kubelet[2607]: I0905 23:55:58.595428 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3040e6c7-ae59-4353-9f35-08b7bd7a921f-socket-dir\") pod \"csi-node-driver-2czxt\" (UID: \"3040e6c7-ae59-4353-9f35-08b7bd7a921f\") " pod="calico-system/csi-node-driver-2czxt" Sep 5 23:55:58.596464 kubelet[2607]: E0905 23:55:58.596351 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.596464 kubelet[2607]: W0905 23:55:58.596369 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.596464 kubelet[2607]: E0905 23:55:58.596396 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.597427 kubelet[2607]: E0905 23:55:58.596794 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.597427 kubelet[2607]: W0905 23:55:58.596994 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.597427 kubelet[2607]: E0905 23:55:58.597012 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.597427 kubelet[2607]: I0905 23:55:58.597033 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3040e6c7-ae59-4353-9f35-08b7bd7a921f-varrun\") pod \"csi-node-driver-2czxt\" (UID: \"3040e6c7-ae59-4353-9f35-08b7bd7a921f\") " pod="calico-system/csi-node-driver-2czxt" Sep 5 23:55:58.598193 kubelet[2607]: E0905 23:55:58.598176 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.598249 kubelet[2607]: W0905 23:55:58.598193 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.598249 kubelet[2607]: E0905 23:55:58.598211 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.598452 kubelet[2607]: E0905 23:55:58.598440 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.598488 kubelet[2607]: W0905 23:55:58.598451 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.598488 kubelet[2607]: E0905 23:55:58.598462 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.598644 kubelet[2607]: E0905 23:55:58.598633 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.598675 kubelet[2607]: W0905 23:55:58.598644 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.598675 kubelet[2607]: E0905 23:55:58.598656 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.598722 kubelet[2607]: I0905 23:55:58.598673 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjlkd\" (UniqueName: \"kubernetes.io/projected/3040e6c7-ae59-4353-9f35-08b7bd7a921f-kube-api-access-pjlkd\") pod \"csi-node-driver-2czxt\" (UID: \"3040e6c7-ae59-4353-9f35-08b7bd7a921f\") " pod="calico-system/csi-node-driver-2czxt" Sep 5 23:55:58.598835 kubelet[2607]: E0905 23:55:58.598824 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.598862 kubelet[2607]: W0905 23:55:58.598838 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.598862 kubelet[2607]: E0905 23:55:58.598852 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.599016 kubelet[2607]: E0905 23:55:58.599005 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.599016 kubelet[2607]: W0905 23:55:58.599016 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.599068 kubelet[2607]: E0905 23:55:58.599024 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.599201 kubelet[2607]: E0905 23:55:58.599190 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.599201 kubelet[2607]: W0905 23:55:58.599202 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.599263 kubelet[2607]: E0905 23:55:58.599214 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.599376 kubelet[2607]: E0905 23:55:58.599365 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.599404 kubelet[2607]: W0905 23:55:58.599376 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.599404 kubelet[2607]: E0905 23:55:58.599389 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.600645 kubelet[2607]: E0905 23:55:58.599757 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.600645 kubelet[2607]: W0905 23:55:58.599775 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.600645 kubelet[2607]: E0905 23:55:58.599786 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.600645 kubelet[2607]: E0905 23:55:58.600140 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.600645 kubelet[2607]: W0905 23:55:58.600150 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.600645 kubelet[2607]: E0905 23:55:58.600160 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.600645 kubelet[2607]: I0905 23:55:58.600195 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3040e6c7-ae59-4353-9f35-08b7bd7a921f-registration-dir\") pod \"csi-node-driver-2czxt\" (UID: \"3040e6c7-ae59-4353-9f35-08b7bd7a921f\") " pod="calico-system/csi-node-driver-2czxt" Sep 5 23:55:58.600645 kubelet[2607]: E0905 23:55:58.600423 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.600645 kubelet[2607]: W0905 23:55:58.600433 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.600897 kubelet[2607]: E0905 23:55:58.600442 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.601254 kubelet[2607]: E0905 23:55:58.601234 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.601296 kubelet[2607]: W0905 23:55:58.601254 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.601296 kubelet[2607]: E0905 23:55:58.601270 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.604059 containerd[1538]: time="2025-09-05T23:55:58.604021658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zmvts,Uid:c1f4fba1-d7f3-4b16-aca5-20c5c1723d9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d861dcb005465c659513c27b4bfdc090abb29a03645ff4f00feeb481a138772\"" Sep 5 23:55:58.701039 kubelet[2607]: E0905 23:55:58.701009 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.701315 kubelet[2607]: W0905 23:55:58.701157 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.701315 kubelet[2607]: E0905 23:55:58.701182 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.701480 kubelet[2607]: E0905 23:55:58.701468 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.701591 kubelet[2607]: W0905 23:55:58.701525 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.701591 kubelet[2607]: E0905 23:55:58.701550 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.701813 kubelet[2607]: E0905 23:55:58.701793 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.701848 kubelet[2607]: W0905 23:55:58.701813 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.701848 kubelet[2607]: E0905 23:55:58.701836 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.702067 kubelet[2607]: E0905 23:55:58.702055 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.702067 kubelet[2607]: W0905 23:55:58.702067 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.702144 kubelet[2607]: E0905 23:55:58.702080 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.702694 kubelet[2607]: E0905 23:55:58.702682 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.702694 kubelet[2607]: W0905 23:55:58.702692 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.702772 kubelet[2607]: E0905 23:55:58.702707 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.702946 kubelet[2607]: E0905 23:55:58.702935 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.702946 kubelet[2607]: W0905 23:55:58.702946 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.703083 kubelet[2607]: E0905 23:55:58.703040 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.703260 kubelet[2607]: E0905 23:55:58.703244 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.703260 kubelet[2607]: W0905 23:55:58.703257 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.703404 kubelet[2607]: E0905 23:55:58.703320 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.703450 kubelet[2607]: E0905 23:55:58.703436 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.703450 kubelet[2607]: W0905 23:55:58.703448 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.703507 kubelet[2607]: E0905 23:55:58.703464 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.703637 kubelet[2607]: E0905 23:55:58.703624 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.703662 kubelet[2607]: W0905 23:55:58.703637 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.703662 kubelet[2607]: E0905 23:55:58.703648 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.703996 kubelet[2607]: E0905 23:55:58.703855 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.703996 kubelet[2607]: W0905 23:55:58.703867 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.703996 kubelet[2607]: E0905 23:55:58.703879 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.704078 kubelet[2607]: E0905 23:55:58.704072 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.704099 kubelet[2607]: W0905 23:55:58.704080 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.704099 kubelet[2607]: E0905 23:55:58.704088 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.704264 kubelet[2607]: E0905 23:55:58.704249 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.704264 kubelet[2607]: W0905 23:55:58.704263 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.704315 kubelet[2607]: E0905 23:55:58.704277 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.704460 kubelet[2607]: E0905 23:55:58.704447 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.704505 kubelet[2607]: W0905 23:55:58.704457 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.704505 kubelet[2607]: E0905 23:55:58.704473 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.704899 kubelet[2607]: E0905 23:55:58.704802 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.704899 kubelet[2607]: W0905 23:55:58.704815 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.704899 kubelet[2607]: E0905 23:55:58.704833 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.705658 kubelet[2607]: E0905 23:55:58.705643 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.705726 kubelet[2607]: W0905 23:55:58.705713 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.705871 kubelet[2607]: E0905 23:55:58.705797 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.706284 kubelet[2607]: E0905 23:55:58.706225 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.706284 kubelet[2607]: W0905 23:55:58.706245 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.706344 kubelet[2607]: E0905 23:55:58.706281 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.706704 kubelet[2607]: E0905 23:55:58.706674 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.706704 kubelet[2607]: W0905 23:55:58.706686 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.706924 kubelet[2607]: E0905 23:55:58.706866 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.707041 kubelet[2607]: E0905 23:55:58.707027 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.707091 kubelet[2607]: W0905 23:55:58.707080 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.707219 kubelet[2607]: E0905 23:55:58.707170 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.707358 kubelet[2607]: E0905 23:55:58.707346 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.707448 kubelet[2607]: W0905 23:55:58.707398 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.707448 kubelet[2607]: E0905 23:55:58.707430 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.708556 kubelet[2607]: E0905 23:55:58.708488 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.708556 kubelet[2607]: W0905 23:55:58.708502 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.708556 kubelet[2607]: E0905 23:55:58.708545 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.708921 kubelet[2607]: E0905 23:55:58.708839 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.708921 kubelet[2607]: W0905 23:55:58.708852 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.709110 kubelet[2607]: E0905 23:55:58.709014 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.709246 kubelet[2607]: E0905 23:55:58.709209 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.709246 kubelet[2607]: W0905 23:55:58.709223 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.709448 kubelet[2607]: E0905 23:55:58.709426 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:55:58.709556 kubelet[2607]: E0905 23:55:58.709543 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.709742 kubelet[2607]: W0905 23:55:58.709597 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.709742 kubelet[2607]: E0905 23:55:58.709622 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.709943 kubelet[2607]: E0905 23:55:58.709841 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.709943 kubelet[2607]: W0905 23:55:58.709851 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.709943 kubelet[2607]: E0905 23:55:58.709861 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.710194 kubelet[2607]: E0905 23:55:58.710172 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.710194 kubelet[2607]: W0905 23:55:58.710187 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.710313 kubelet[2607]: E0905 23:55:58.710199 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:58.720916 kubelet[2607]: E0905 23:55:58.720898 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:55:58.720916 kubelet[2607]: W0905 23:55:58.720915 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:55:58.721008 kubelet[2607]: E0905 23:55:58.720928 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:55:59.201769 systemd[1]: run-containerd-runc-k8s.io-53c8ab06f715cddea3dd8056df24f41a80c81183fdd551d39b237dff4f70cfff-runc.Cz2Ph5.mount: Deactivated successfully. Sep 5 23:55:59.375371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3215528497.mount: Deactivated successfully. 
Sep 5 23:55:59.912658 containerd[1538]: time="2025-09-05T23:55:59.912594374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:55:59.914272 containerd[1538]: time="2025-09-05T23:55:59.914216818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 5 23:55:59.915625 containerd[1538]: time="2025-09-05T23:55:59.915570342Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:55:59.919260 containerd[1538]: time="2025-09-05T23:55:59.919212192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:55:59.920179 containerd[1538]: time="2025-09-05T23:55:59.920049834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.538149555s"
Sep 5 23:55:59.920179 containerd[1538]: time="2025-09-05T23:55:59.920082514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 5 23:55:59.922585 containerd[1538]: time="2025-09-05T23:55:59.922022600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 5 23:55:59.941681 containerd[1538]: time="2025-09-05T23:55:59.941630414Z" level=info msg="CreateContainer within sandbox \"53c8ab06f715cddea3dd8056df24f41a80c81183fdd551d39b237dff4f70cfff\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 5 23:55:59.956692 containerd[1538]: time="2025-09-05T23:55:59.956403136Z" level=info msg="CreateContainer within sandbox \"53c8ab06f715cddea3dd8056df24f41a80c81183fdd551d39b237dff4f70cfff\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b627a332175e159ac7aed47423e621a0de4431b6ed8d2045abb0dc3939e985c3\""
Sep 5 23:55:59.956997 containerd[1538]: time="2025-09-05T23:55:59.956952057Z" level=info msg="StartContainer for \"b627a332175e159ac7aed47423e621a0de4431b6ed8d2045abb0dc3939e985c3\""
Sep 5 23:56:00.028563 containerd[1538]: time="2025-09-05T23:56:00.028521171Z" level=info msg="StartContainer for \"b627a332175e159ac7aed47423e621a0de4431b6ed8d2045abb0dc3939e985c3\" returns successfully"
Sep 5 23:56:00.086213 kubelet[2607]: E0905 23:56:00.086164 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2czxt" podUID="3040e6c7-ae59-4353-9f35-08b7bd7a921f"
Sep 5 23:56:00.181688 kubelet[2607]: E0905 23:56:00.181577 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:56:00.219907 kubelet[2607]: I0905 23:56:00.219841 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c76b6d88d-l7lg8" podStartSLOduration=1.679525749 podStartE2EDuration="3.219822551s" podCreationTimestamp="2025-09-05 23:55:57 +0000 UTC" firstStartedPulling="2025-09-05 23:55:58.380877356 +0000 UTC m=+17.388116665" lastFinishedPulling="2025-09-05 23:55:59.921174118 +0000 UTC m=+18.928413467" observedRunningTime="2025-09-05 23:56:00.21953047 +0000 UTC m=+19.226769779" watchObservedRunningTime="2025-09-05 23:56:00.219822551 +0000 UTC m=+19.227061860"
Sep 5 23:56:00.235206 kubelet[2607]: E0905 23:56:00.235167 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 23:56:00.235421 kubelet[2607]: W0905 23:56:00.235192 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 23:56:00.235473 kubelet[2607]: E0905 23:56:00.235436 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Sep 5 23:56:00.239078 kubelet[2607]: E0905 23:56:00.239048 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.239078 kubelet[2607]: W0905 23:56:00.239069 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.239078 kubelet[2607]: E0905 23:56:00.239083 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.239486 kubelet[2607]: E0905 23:56:00.239462 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.239486 kubelet[2607]: W0905 23:56:00.239480 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.239556 kubelet[2607]: E0905 23:56:00.239493 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.241230 kubelet[2607]: E0905 23:56:00.241194 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.241230 kubelet[2607]: W0905 23:56:00.241213 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.241230 kubelet[2607]: E0905 23:56:00.241225 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.243343 kubelet[2607]: E0905 23:56:00.241437 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.243343 kubelet[2607]: W0905 23:56:00.241451 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.243343 kubelet[2607]: E0905 23:56:00.241461 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.244072 kubelet[2607]: E0905 23:56:00.244044 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.244072 kubelet[2607]: W0905 23:56:00.244066 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.244156 kubelet[2607]: E0905 23:56:00.244079 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:00.244382 kubelet[2607]: E0905 23:56:00.244358 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.244382 kubelet[2607]: W0905 23:56:00.244375 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.244445 kubelet[2607]: E0905 23:56:00.244387 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.247078 kubelet[2607]: E0905 23:56:00.247045 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.247078 kubelet[2607]: W0905 23:56:00.247067 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.247078 kubelet[2607]: E0905 23:56:00.247081 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.247378 kubelet[2607]: E0905 23:56:00.247355 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.247378 kubelet[2607]: W0905 23:56:00.247371 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.247447 kubelet[2607]: E0905 23:56:00.247392 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.249114 kubelet[2607]: E0905 23:56:00.249083 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.249114 kubelet[2607]: W0905 23:56:00.249103 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.249114 kubelet[2607]: E0905 23:56:00.249117 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.250360 kubelet[2607]: E0905 23:56:00.250331 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.250360 kubelet[2607]: W0905 23:56:00.250349 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.250360 kubelet[2607]: E0905 23:56:00.250362 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:00.250788 kubelet[2607]: E0905 23:56:00.250580 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.250788 kubelet[2607]: W0905 23:56:00.250592 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.250788 kubelet[2607]: E0905 23:56:00.250601 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.315076 kubelet[2607]: E0905 23:56:00.315038 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.315076 kubelet[2607]: W0905 23:56:00.315063 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.315076 kubelet[2607]: E0905 23:56:00.315083 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.315363 kubelet[2607]: E0905 23:56:00.315345 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.315363 kubelet[2607]: W0905 23:56:00.315360 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.315423 kubelet[2607]: E0905 23:56:00.315376 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.315613 kubelet[2607]: E0905 23:56:00.315575 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.315613 kubelet[2607]: W0905 23:56:00.315588 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.315613 kubelet[2607]: E0905 23:56:00.315601 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.315784 kubelet[2607]: E0905 23:56:00.315771 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.315809 kubelet[2607]: W0905 23:56:00.315783 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.315809 kubelet[2607]: E0905 23:56:00.315798 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:00.316024 kubelet[2607]: E0905 23:56:00.316008 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.316024 kubelet[2607]: W0905 23:56:00.316022 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.316088 kubelet[2607]: E0905 23:56:00.316036 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.316195 kubelet[2607]: E0905 23:56:00.316184 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.316195 kubelet[2607]: W0905 23:56:00.316194 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.316255 kubelet[2607]: E0905 23:56:00.316205 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.316419 kubelet[2607]: E0905 23:56:00.316405 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.316419 kubelet[2607]: W0905 23:56:00.316418 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.316535 kubelet[2607]: E0905 23:56:00.316448 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.317014 kubelet[2607]: E0905 23:56:00.316999 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.317014 kubelet[2607]: W0905 23:56:00.317010 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.317105 kubelet[2607]: E0905 23:56:00.317043 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.317189 kubelet[2607]: E0905 23:56:00.317177 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.317189 kubelet[2607]: W0905 23:56:00.317187 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.317249 kubelet[2607]: E0905 23:56:00.317209 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:00.317350 kubelet[2607]: E0905 23:56:00.317338 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.317350 kubelet[2607]: W0905 23:56:00.317348 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.317396 kubelet[2607]: E0905 23:56:00.317363 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.317526 kubelet[2607]: E0905 23:56:00.317513 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.317526 kubelet[2607]: W0905 23:56:00.317525 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.317577 kubelet[2607]: E0905 23:56:00.317537 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.317712 kubelet[2607]: E0905 23:56:00.317701 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.317736 kubelet[2607]: W0905 23:56:00.317712 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.317736 kubelet[2607]: E0905 23:56:00.317724 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.317887 kubelet[2607]: E0905 23:56:00.317877 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.317911 kubelet[2607]: W0905 23:56:00.317887 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.317911 kubelet[2607]: E0905 23:56:00.317900 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.318189 kubelet[2607]: E0905 23:56:00.318172 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.318222 kubelet[2607]: W0905 23:56:00.318189 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.318222 kubelet[2607]: E0905 23:56:00.318201 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:00.318436 kubelet[2607]: E0905 23:56:00.318421 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.318436 kubelet[2607]: W0905 23:56:00.318435 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.318481 kubelet[2607]: E0905 23:56:00.318451 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.318665 kubelet[2607]: E0905 23:56:00.318652 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.318665 kubelet[2607]: W0905 23:56:00.318663 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.318722 kubelet[2607]: E0905 23:56:00.318685 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.318934 kubelet[2607]: E0905 23:56:00.318918 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.318957 kubelet[2607]: W0905 23:56:00.318935 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.318957 kubelet[2607]: E0905 23:56:00.318953 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:00.319173 kubelet[2607]: E0905 23:56:00.319157 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:00.319202 kubelet[2607]: W0905 23:56:00.319173 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:00.319202 kubelet[2607]: E0905 23:56:00.319185 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
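The driver-call.go/plugins.go triplet above repeats with only timestamps changing between 23:56:00.239 and 23:56:00.319: the kubelet is probing the FlexVolume plugin directory nodeagent~uds, the driver executable does not exist, the exec produces no stdout, and unmarshalling the empty string fails with "unexpected end of JSON input". For context, here is a minimal sketch in Go of the init handshake the kubelet is waiting for; the status/capabilities JSON shape follows the documented FlexVolume convention, and the stub is purely illustrative, not the missing uds driver itself.

// flexvol_stub.go - a minimal sketch of the FlexVolume "init" handshake
// that driver-call.go is failing to complete above. The kubelet execs the
// driver binary with "init" as its first argument and unmarshals whatever
// lands on stdout; a missing executable yields "" and the
// "unexpected end of JSON input" errors seen in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the result object the FlexVolume convention expects.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// attach=false tells the kubelet not to expect attach/detach
		// calls from this driver.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}

Installed as /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, a driver answering in this form would satisfy the probe and end the repeated errors.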
Sep 5 23:56:00.899740 containerd[1538]: time="2025-09-05T23:56:00.899693084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:56:00.902019 containerd[1538]: time="2025-09-05T23:56:00.901941130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814"
Sep 5 23:56:00.902931 containerd[1538]: time="2025-09-05T23:56:00.902883573Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:56:00.907048 containerd[1538]: time="2025-09-05T23:56:00.906789623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:56:00.908137 containerd[1538]: time="2025-09-05T23:56:00.908044506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 985.988346ms"
Sep 5 23:56:00.908137 containerd[1538]: time="2025-09-05T23:56:00.908081186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 5 23:56:00.910872 containerd[1538]: time="2025-09-05T23:56:00.910843554Z" level=info msg="CreateContainer within sandbox \"3d861dcb005465c659513c27b4bfdc090abb29a03645ff4f00feeb481a138772\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 5 23:56:00.925980 containerd[1538]: time="2025-09-05T23:56:00.925870313Z" level=info msg="CreateContainer within sandbox \"3d861dcb005465c659513c27b4bfdc090abb29a03645ff4f00feeb481a138772\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"34c6a8e79f9086691e30d49fa78120e305f61f194e7c9caaf30c43b8c482e64a\""
Sep 5 23:56:00.926507 containerd[1538]: time="2025-09-05T23:56:00.926480154Z" level=info msg="StartContainer for \"34c6a8e79f9086691e30d49fa78120e305f61f194e7c9caaf30c43b8c482e64a\""
Sep 5 23:56:00.973570 containerd[1538]: time="2025-09-05T23:56:00.973471237Z" level=info msg="StartContainer for \"34c6a8e79f9086691e30d49fa78120e305f61f194e7c9caaf30c43b8c482e64a\" returns successfully"
Sep 5 23:56:01.133783 containerd[1538]: time="2025-09-05T23:56:01.130049104Z" level=info msg="shim disconnected" id=34c6a8e79f9086691e30d49fa78120e305f61f194e7c9caaf30c43b8c482e64a namespace=k8s.io
Sep 5 23:56:01.133783 containerd[1538]: time="2025-09-05T23:56:01.133780634Z" level=warning msg="cleaning up after shim disconnected" id=34c6a8e79f9086691e30d49fa78120e305f61f194e7c9caaf30c43b8c482e64a namespace=k8s.io
Sep 5 23:56:01.133783 containerd[1538]: time="2025-09-05T23:56:01.133795714Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 23:56:01.185049 kubelet[2607]: E0905 23:56:01.184676 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
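The ImageCreate/PullImage/CreateContainer/StartContainer sequence above is containerd's CRI plugin fetching the Calico pod2daemon-flexvol image and running the short-lived flexvol-driver init container; the "shim disconnected" and "cleaning up dead shim" lines are the normal teardown after that container exits. Below is a rough sketch of the same pull driven through containerd's Go client rather than the CRI plugin, assuming the default socket path and the k8s.io namespace that the shim cleanup lines report.

// pull_sketch.go - an illustrative pull of the image logged above via the
// containerd Go client. Socket path and namespace are containerd/CRI
// defaults; error handling is trimmed for brevity.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace, which is why
	// the shim cleanup entries above carry namespace=k8s.io.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx,
		"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}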
Sep 5 23:56:01.194417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34c6a8e79f9086691e30d49fa78120e305f61f194e7c9caaf30c43b8c482e64a-rootfs.mount: Deactivated successfully.
Sep 5 23:56:01.202011 containerd[1538]: time="2025-09-05T23:56:01.201077558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 5 23:56:02.085182 kubelet[2607]: E0905 23:56:02.084714 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2czxt" podUID="3040e6c7-ae59-4353-9f35-08b7bd7a921f"
Sep 5 23:56:02.190598 kubelet[2607]: E0905 23:56:02.190145 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:56:03.191498 kubelet[2607]: E0905 23:56:03.191165 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:56:03.355151 containerd[1538]: time="2025-09-05T23:56:03.354585048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:56:03.355653 containerd[1538]: time="2025-09-05T23:56:03.355591570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 5 23:56:03.356878 containerd[1538]: time="2025-09-05T23:56:03.356815013Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:56:03.359600 containerd[1538]: time="2025-09-05T23:56:03.359544099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:56:03.360649 containerd[1538]: time="2025-09-05T23:56:03.360611821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.159475343s"
Sep 5 23:56:03.360748 containerd[1538]: time="2025-09-05T23:56:03.360652541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 5 23:56:03.363599 containerd[1538]: time="2025-09-05T23:56:03.363379987Z" level=info msg="CreateContainer within sandbox \"3d861dcb005465c659513c27b4bfdc090abb29a03645ff4f00feeb481a138772\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 5 23:56:03.390804 containerd[1538]: time="2025-09-05T23:56:03.390753846Z" level=info msg="CreateContainer within sandbox \"3d861dcb005465c659513c27b4bfdc090abb29a03645ff4f00feeb481a138772\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f73fcc1bba8f735852353e3579cdbb9b1ebc9edd5be7177cec5629990652d9e6\""
Sep 5 23:56:03.391330 containerd[1538]: time="2025-09-05T23:56:03.391277487Z" level=info msg="StartContainer for 
\"f73fcc1bba8f735852353e3579cdbb9b1ebc9edd5be7177cec5629990652d9e6\"" Sep 5 23:56:03.447379 containerd[1538]: time="2025-09-05T23:56:03.447293807Z" level=info msg="StartContainer for \"f73fcc1bba8f735852353e3579cdbb9b1ebc9edd5be7177cec5629990652d9e6\" returns successfully" Sep 5 23:56:04.084847 kubelet[2607]: E0905 23:56:04.084754 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2czxt" podUID="3040e6c7-ae59-4353-9f35-08b7bd7a921f" Sep 5 23:56:04.112333 containerd[1538]: time="2025-09-05T23:56:04.112283542Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:56:04.131316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f73fcc1bba8f735852353e3579cdbb9b1ebc9edd5be7177cec5629990652d9e6-rootfs.mount: Deactivated successfully. Sep 5 23:56:04.137131 containerd[1538]: time="2025-09-05T23:56:04.137074552Z" level=info msg="shim disconnected" id=f73fcc1bba8f735852353e3579cdbb9b1ebc9edd5be7177cec5629990652d9e6 namespace=k8s.io Sep 5 23:56:04.137131 containerd[1538]: time="2025-09-05T23:56:04.137131352Z" level=warning msg="cleaning up after shim disconnected" id=f73fcc1bba8f735852353e3579cdbb9b1ebc9edd5be7177cec5629990652d9e6 namespace=k8s.io Sep 5 23:56:04.137266 containerd[1538]: time="2025-09-05T23:56:04.137139912Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:56:04.194384 containerd[1538]: time="2025-09-05T23:56:04.194300547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 23:56:04.204658 kubelet[2607]: I0905 23:56:04.203052 2607 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 5 23:56:04.232916 kubelet[2607]: W0905 23:56:04.232874 2607 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 5 23:56:04.239178 kubelet[2607]: E0905 23:56:04.238440 2607 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 5 23:56:04.348307 kubelet[2607]: I0905 23:56:04.348186 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c3e2115-205c-4a12-b2ec-acd6067f08c2-whisker-backend-key-pair\") pod \"whisker-cbcc9bc56-42xlw\" (UID: \"8c3e2115-205c-4a12-b2ec-acd6067f08c2\") " pod="calico-system/whisker-cbcc9bc56-42xlw" Sep 5 23:56:04.348307 kubelet[2607]: I0905 23:56:04.348235 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ww7k\" (UniqueName: \"kubernetes.io/projected/8a5e4a42-edd8-4334-8fc9-cc5dbb79316a-kube-api-access-9ww7k\") pod \"coredns-7c65d6cfc9-q8wpj\" 
(UID: \"8a5e4a42-edd8-4334-8fc9-cc5dbb79316a\") " pod="kube-system/coredns-7c65d6cfc9-q8wpj" Sep 5 23:56:04.348307 kubelet[2607]: I0905 23:56:04.348257 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgvsd\" (UniqueName: \"kubernetes.io/projected/882e113a-3ec9-4622-a5cc-bcbb76b3dde2-kube-api-access-cgvsd\") pod \"calico-apiserver-7847fb757b-mxf96\" (UID: \"882e113a-3ec9-4622-a5cc-bcbb76b3dde2\") " pod="calico-apiserver/calico-apiserver-7847fb757b-mxf96" Sep 5 23:56:04.348307 kubelet[2607]: I0905 23:56:04.348275 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/604f5067-a6d0-4576-889f-1368c927e973-tigera-ca-bundle\") pod \"calico-kube-controllers-55df5c4477-4fndx\" (UID: \"604f5067-a6d0-4576-889f-1368c927e973\") " pod="calico-system/calico-kube-controllers-55df5c4477-4fndx" Sep 5 23:56:04.348481 kubelet[2607]: I0905 23:56:04.348347 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz8dd\" (UniqueName: \"kubernetes.io/projected/604f5067-a6d0-4576-889f-1368c927e973-kube-api-access-sz8dd\") pod \"calico-kube-controllers-55df5c4477-4fndx\" (UID: \"604f5067-a6d0-4576-889f-1368c927e973\") " pod="calico-system/calico-kube-controllers-55df5c4477-4fndx" Sep 5 23:56:04.348481 kubelet[2607]: I0905 23:56:04.348401 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfcnd\" (UniqueName: \"kubernetes.io/projected/d96f52bf-df90-4603-8a34-8bdf02a0c13f-kube-api-access-jfcnd\") pod \"goldmane-7988f88666-dnd4d\" (UID: \"d96f52bf-df90-4603-8a34-8bdf02a0c13f\") " pod="calico-system/goldmane-7988f88666-dnd4d" Sep 5 23:56:04.348481 kubelet[2607]: I0905 23:56:04.348473 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r99qt\" (UniqueName: \"kubernetes.io/projected/8c3e2115-205c-4a12-b2ec-acd6067f08c2-kube-api-access-r99qt\") pod \"whisker-cbcc9bc56-42xlw\" (UID: \"8c3e2115-205c-4a12-b2ec-acd6067f08c2\") " pod="calico-system/whisker-cbcc9bc56-42xlw" Sep 5 23:56:04.348553 kubelet[2607]: I0905 23:56:04.348496 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d96f52bf-df90-4603-8a34-8bdf02a0c13f-goldmane-ca-bundle\") pod \"goldmane-7988f88666-dnd4d\" (UID: \"d96f52bf-df90-4603-8a34-8bdf02a0c13f\") " pod="calico-system/goldmane-7988f88666-dnd4d" Sep 5 23:56:04.348553 kubelet[2607]: I0905 23:56:04.348531 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c3e2115-205c-4a12-b2ec-acd6067f08c2-whisker-ca-bundle\") pod \"whisker-cbcc9bc56-42xlw\" (UID: \"8c3e2115-205c-4a12-b2ec-acd6067f08c2\") " pod="calico-system/whisker-cbcc9bc56-42xlw" Sep 5 23:56:04.348630 kubelet[2607]: I0905 23:56:04.348557 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80ac692e-415b-4419-9e3c-f3f01df822d0-config-volume\") pod \"coredns-7c65d6cfc9-xp2hz\" (UID: \"80ac692e-415b-4419-9e3c-f3f01df822d0\") " pod="kube-system/coredns-7c65d6cfc9-xp2hz" Sep 5 23:56:04.348630 kubelet[2607]: I0905 23:56:04.348611 2607 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/882e113a-3ec9-4622-a5cc-bcbb76b3dde2-calico-apiserver-certs\") pod \"calico-apiserver-7847fb757b-mxf96\" (UID: \"882e113a-3ec9-4622-a5cc-bcbb76b3dde2\") " pod="calico-apiserver/calico-apiserver-7847fb757b-mxf96" Sep 5 23:56:04.348674 kubelet[2607]: I0905 23:56:04.348628 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a5e4a42-edd8-4334-8fc9-cc5dbb79316a-config-volume\") pod \"coredns-7c65d6cfc9-q8wpj\" (UID: \"8a5e4a42-edd8-4334-8fc9-cc5dbb79316a\") " pod="kube-system/coredns-7c65d6cfc9-q8wpj" Sep 5 23:56:04.348930 kubelet[2607]: I0905 23:56:04.348894 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d96f52bf-df90-4603-8a34-8bdf02a0c13f-goldmane-key-pair\") pod \"goldmane-7988f88666-dnd4d\" (UID: \"d96f52bf-df90-4603-8a34-8bdf02a0c13f\") " pod="calico-system/goldmane-7988f88666-dnd4d" Sep 5 23:56:04.348977 kubelet[2607]: I0905 23:56:04.348953 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7cbedf47-c538-40ff-85f2-c633bdff94ba-calico-apiserver-certs\") pod \"calico-apiserver-7847fb757b-kgbj8\" (UID: \"7cbedf47-c538-40ff-85f2-c633bdff94ba\") " pod="calico-apiserver/calico-apiserver-7847fb757b-kgbj8" Sep 5 23:56:04.349035 kubelet[2607]: I0905 23:56:04.349021 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d96f52bf-df90-4603-8a34-8bdf02a0c13f-config\") pod \"goldmane-7988f88666-dnd4d\" (UID: \"d96f52bf-df90-4603-8a34-8bdf02a0c13f\") " pod="calico-system/goldmane-7988f88666-dnd4d" Sep 5 23:56:04.349065 kubelet[2607]: I0905 23:56:04.349047 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkjfs\" (UniqueName: \"kubernetes.io/projected/7cbedf47-c538-40ff-85f2-c633bdff94ba-kube-api-access-dkjfs\") pod \"calico-apiserver-7847fb757b-kgbj8\" (UID: \"7cbedf47-c538-40ff-85f2-c633bdff94ba\") " pod="calico-apiserver/calico-apiserver-7847fb757b-kgbj8" Sep 5 23:56:04.349190 kubelet[2607]: I0905 23:56:04.349158 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvhj8\" (UniqueName: \"kubernetes.io/projected/80ac692e-415b-4419-9e3c-f3f01df822d0-kube-api-access-nvhj8\") pod \"coredns-7c65d6cfc9-xp2hz\" (UID: \"80ac692e-415b-4419-9e3c-f3f01df822d0\") " pod="kube-system/coredns-7c65d6cfc9-xp2hz" Sep 5 23:56:04.550701 containerd[1538]: time="2025-09-05T23:56:04.550654425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55df5c4477-4fndx,Uid:604f5067-a6d0-4576-889f-1368c927e973,Namespace:calico-system,Attempt:0,}" Sep 5 23:56:04.550701 containerd[1538]: time="2025-09-05T23:56:04.550692746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-dnd4d,Uid:d96f52bf-df90-4603-8a34-8bdf02a0c13f,Namespace:calico-system,Attempt:0,}" Sep 5 23:56:04.551092 containerd[1538]: time="2025-09-05T23:56:04.550659145Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7847fb757b-kgbj8,Uid:7cbedf47-c538-40ff-85f2-c633bdff94ba,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:56:04.551092 containerd[1538]: time="2025-09-05T23:56:04.550709906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7847fb757b-mxf96,Uid:882e113a-3ec9-4622-a5cc-bcbb76b3dde2,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:56:04.551153 containerd[1538]: time="2025-09-05T23:56:04.550672786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cbcc9bc56-42xlw,Uid:8c3e2115-205c-4a12-b2ec-acd6067f08c2,Namespace:calico-system,Attempt:0,}" Sep 5 23:56:04.695486 containerd[1538]: time="2025-09-05T23:56:04.695420037Z" level=error msg="Failed to destroy network for sandbox \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.700116 containerd[1538]: time="2025-09-05T23:56:04.700060127Z" level=error msg="encountered an error cleaning up failed sandbox \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.700313 containerd[1538]: time="2025-09-05T23:56:04.700133327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7847fb757b-mxf96,Uid:882e113a-3ec9-4622-a5cc-bcbb76b3dde2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.703686 containerd[1538]: time="2025-09-05T23:56:04.703564134Z" level=error msg="Failed to destroy network for sandbox \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.704323 containerd[1538]: time="2025-09-05T23:56:04.704166615Z" level=error msg="encountered an error cleaning up failed sandbox \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.704323 containerd[1538]: time="2025-09-05T23:56:04.704233815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7847fb757b-kgbj8,Uid:7cbedf47-c538-40ff-85f2-c633bdff94ba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.705496 containerd[1538]: time="2025-09-05T23:56:04.705450777Z" level=error msg="Failed to destroy network for sandbox 
\"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.707215 containerd[1538]: time="2025-09-05T23:56:04.707138981Z" level=error msg="encountered an error cleaning up failed sandbox \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.707318 containerd[1538]: time="2025-09-05T23:56:04.707212861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-dnd4d,Uid:d96f52bf-df90-4603-8a34-8bdf02a0c13f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.708998 containerd[1538]: time="2025-09-05T23:56:04.708955785Z" level=error msg="Failed to destroy network for sandbox \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.709441 containerd[1538]: time="2025-09-05T23:56:04.709322985Z" level=error msg="encountered an error cleaning up failed sandbox \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.709441 containerd[1538]: time="2025-09-05T23:56:04.709361025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55df5c4477-4fndx,Uid:604f5067-a6d0-4576-889f-1368c927e973,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.712212 containerd[1538]: time="2025-09-05T23:56:04.711778510Z" level=error msg="Failed to destroy network for sandbox \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.712212 containerd[1538]: time="2025-09-05T23:56:04.712118431Z" level=error msg="encountered an error cleaning up failed sandbox \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.712212 containerd[1538]: time="2025-09-05T23:56:04.712152351Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cbcc9bc56-42xlw,Uid:8c3e2115-205c-4a12-b2ec-acd6067f08c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.713618 kubelet[2607]: E0905 23:56:04.713503 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.715326 kubelet[2607]: E0905 23:56:04.714478 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.715326 kubelet[2607]: E0905 23:56:04.714999 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.715326 kubelet[2607]: E0905 23:56:04.715309 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-dnd4d" Sep 5 23:56:04.715441 kubelet[2607]: E0905 23:56:04.715413 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.715469 kubelet[2607]: E0905 23:56:04.715456 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cbcc9bc56-42xlw" Sep 5 23:56:04.715838 kubelet[2607]: E0905 23:56:04.715802 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:04.715868 kubelet[2607]: E0905 23:56:04.715853 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7847fb757b-mxf96" Sep 5 23:56:04.718857 kubelet[2607]: E0905 23:56:04.718824 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55df5c4477-4fndx" Sep 5 23:56:04.718984 kubelet[2607]: E0905 23:56:04.718956 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55df5c4477-4fndx" Sep 5 23:56:04.719088 kubelet[2607]: E0905 23:56:04.719064 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55df5c4477-4fndx_calico-system(604f5067-a6d0-4576-889f-1368c927e973)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55df5c4477-4fndx_calico-system(604f5067-a6d0-4576-889f-1368c927e973)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55df5c4477-4fndx" podUID="604f5067-a6d0-4576-889f-1368c927e973" Sep 5 23:56:04.719281 kubelet[2607]: E0905 23:56:04.719220 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cbcc9bc56-42xlw" Sep 5 23:56:04.719331 kubelet[2607]: E0905 23:56:04.719278 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-cbcc9bc56-42xlw_calico-system(8c3e2115-205c-4a12-b2ec-acd6067f08c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-cbcc9bc56-42xlw_calico-system(8c3e2115-205c-4a12-b2ec-acd6067f08c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cbcc9bc56-42xlw" podUID="8c3e2115-205c-4a12-b2ec-acd6067f08c2" Sep 5 23:56:04.719393 kubelet[2607]: E0905 23:56:04.715311 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7847fb757b-kgbj8" Sep 5 23:56:04.719421 kubelet[2607]: E0905 23:56:04.719390 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7847fb757b-kgbj8" Sep 5 23:56:04.719446 kubelet[2607]: E0905 23:56:04.719415 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7847fb757b-kgbj8_calico-apiserver(7cbedf47-c538-40ff-85f2-c633bdff94ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7847fb757b-kgbj8_calico-apiserver(7cbedf47-c538-40ff-85f2-c633bdff94ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7847fb757b-kgbj8" podUID="7cbedf47-c538-40ff-85f2-c633bdff94ba" Sep 5 23:56:04.719639 kubelet[2607]: E0905 23:56:04.719608 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7847fb757b-mxf96" Sep 5 23:56:04.719692 kubelet[2607]: E0905 23:56:04.719667 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7847fb757b-mxf96_calico-apiserver(882e113a-3ec9-4622-a5cc-bcbb76b3dde2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7847fb757b-mxf96_calico-apiserver(882e113a-3ec9-4622-a5cc-bcbb76b3dde2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7847fb757b-mxf96" podUID="882e113a-3ec9-4622-a5cc-bcbb76b3dde2" Sep 5 23:56:04.720591 kubelet[2607]: E0905 23:56:04.720561 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-dnd4d" Sep 5 23:56:04.720643 kubelet[2607]: E0905 23:56:04.720623 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-dnd4d_calico-system(d96f52bf-df90-4603-8a34-8bdf02a0c13f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-dnd4d_calico-system(d96f52bf-df90-4603-8a34-8bdf02a0c13f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-dnd4d" podUID="d96f52bf-df90-4603-8a34-8bdf02a0c13f" Sep 5 23:56:05.195492 kubelet[2607]: I0905 23:56:05.195460 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:05.196769 containerd[1538]: time="2025-09-05T23:56:05.196370542Z" level=info msg="StopPodSandbox for \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\"" Sep 5 23:56:05.196769 containerd[1538]: time="2025-09-05T23:56:05.196530063Z" level=info msg="Ensure that sandbox a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45 in task-service has been cleanup successfully" Sep 5 23:56:05.197724 kubelet[2607]: I0905 23:56:05.197697 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:05.198359 containerd[1538]: time="2025-09-05T23:56:05.198315666Z" level=info msg="StopPodSandbox for \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\"" Sep 5 23:56:05.198699 containerd[1538]: time="2025-09-05T23:56:05.198655067Z" level=info msg="Ensure that sandbox 65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772 in task-service has been cleanup successfully" Sep 5 23:56:05.198795 kubelet[2607]: I0905 23:56:05.198768 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:05.199406 containerd[1538]: time="2025-09-05T23:56:05.199326948Z" level=info msg="StopPodSandbox for \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\"" Sep 5 23:56:05.199981 containerd[1538]: time="2025-09-05T23:56:05.199782029Z" level=info msg="Ensure that sandbox d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4 in task-service has been cleanup successfully" Sep 5 23:56:05.201346 kubelet[2607]: I0905 23:56:05.201319 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:05.201778 containerd[1538]: time="2025-09-05T23:56:05.201752392Z" level=info msg="StopPodSandbox for \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\"" Sep 5 23:56:05.202463 containerd[1538]: time="2025-09-05T23:56:05.202255593Z" level=info msg="Ensure that sandbox 
16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25 in task-service has been cleanup successfully" Sep 5 23:56:05.204658 kubelet[2607]: I0905 23:56:05.204628 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:05.205621 containerd[1538]: time="2025-09-05T23:56:05.205588360Z" level=info msg="StopPodSandbox for \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\"" Sep 5 23:56:05.205938 containerd[1538]: time="2025-09-05T23:56:05.205915240Z" level=info msg="Ensure that sandbox d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e in task-service has been cleanup successfully" Sep 5 23:56:05.238654 containerd[1538]: time="2025-09-05T23:56:05.238600582Z" level=error msg="StopPodSandbox for \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\" failed" error="failed to destroy network for sandbox \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:05.238868 kubelet[2607]: E0905 23:56:05.238827 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:05.239179 kubelet[2607]: E0905 23:56:05.238888 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45"} Sep 5 23:56:05.239179 kubelet[2607]: E0905 23:56:05.238947 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"882e113a-3ec9-4622-a5cc-bcbb76b3dde2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:05.239179 kubelet[2607]: E0905 23:56:05.238983 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"882e113a-3ec9-4622-a5cc-bcbb76b3dde2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7847fb757b-mxf96" podUID="882e113a-3ec9-4622-a5cc-bcbb76b3dde2" Sep 5 23:56:05.244396 containerd[1538]: time="2025-09-05T23:56:05.244300473Z" level=error msg="StopPodSandbox for \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\" failed" error="failed to destroy network for sandbox \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:05.244721 kubelet[2607]: E0905 23:56:05.244673 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:05.244772 kubelet[2607]: E0905 23:56:05.244730 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25"} Sep 5 23:56:05.244772 kubelet[2607]: E0905 23:56:05.244760 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cbedf47-c538-40ff-85f2-c633bdff94ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:05.244849 kubelet[2607]: E0905 23:56:05.244782 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cbedf47-c538-40ff-85f2-c633bdff94ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7847fb757b-kgbj8" podUID="7cbedf47-c538-40ff-85f2-c633bdff94ba" Sep 5 23:56:05.248663 containerd[1538]: time="2025-09-05T23:56:05.248565441Z" level=error msg="StopPodSandbox for \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\" failed" error="failed to destroy network for sandbox \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:05.248663 containerd[1538]: time="2025-09-05T23:56:05.248610401Z" level=error msg="StopPodSandbox for \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\" failed" error="failed to destroy network for sandbox \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:05.249006 kubelet[2607]: E0905 23:56:05.248753 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:05.249006 kubelet[2607]: E0905 23:56:05.248796 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e"} Sep 5 23:56:05.249006 kubelet[2607]: E0905 23:56:05.248823 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d96f52bf-df90-4603-8a34-8bdf02a0c13f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:05.249006 kubelet[2607]: E0905 23:56:05.248762 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:05.249006 kubelet[2607]: E0905 23:56:05.248870 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4"} Sep 5 23:56:05.249165 kubelet[2607]: E0905 23:56:05.248897 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8c3e2115-205c-4a12-b2ec-acd6067f08c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:05.249165 kubelet[2607]: E0905 23:56:05.248842 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d96f52bf-df90-4603-8a34-8bdf02a0c13f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-dnd4d" podUID="d96f52bf-df90-4603-8a34-8bdf02a0c13f" Sep 5 23:56:05.249165 kubelet[2607]: E0905 23:56:05.248933 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8c3e2115-205c-4a12-b2ec-acd6067f08c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cbcc9bc56-42xlw" podUID="8c3e2115-205c-4a12-b2ec-acd6067f08c2" Sep 5 23:56:05.251129 containerd[1538]: time="2025-09-05T23:56:05.251096566Z" 
level=error msg="StopPodSandbox for \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\" failed" error="failed to destroy network for sandbox \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:05.251386 kubelet[2607]: E0905 23:56:05.251246 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:05.251386 kubelet[2607]: E0905 23:56:05.251280 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772"} Sep 5 23:56:05.251386 kubelet[2607]: E0905 23:56:05.251307 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"604f5067-a6d0-4576-889f-1368c927e973\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:05.251386 kubelet[2607]: E0905 23:56:05.251327 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"604f5067-a6d0-4576-889f-1368c927e973\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55df5c4477-4fndx" podUID="604f5067-a6d0-4576-889f-1368c927e973" Sep 5 23:56:05.431989 kubelet[2607]: E0905 23:56:05.431666 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:56:05.432248 containerd[1538]: time="2025-09-05T23:56:05.432203348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q8wpj,Uid:8a5e4a42-edd8-4334-8fc9-cc5dbb79316a,Namespace:kube-system,Attempt:0,}" Sep 5 23:56:05.437825 kubelet[2607]: E0905 23:56:05.437789 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:56:05.438473 containerd[1538]: time="2025-09-05T23:56:05.438428280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xp2hz,Uid:80ac692e-415b-4419-9e3c-f3f01df822d0,Namespace:kube-system,Attempt:0,}" Sep 5 23:56:05.461035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e-shm.mount: Deactivated successfully. 
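Editor's note: every StopPodSandbox attempt above (and below) fails with the same underlying error. Before handling any ADD or DEL, the Calico CNI plugin resolves the node's name from /var/lib/calico/nodename, a file the calico/node container writes once it is running; the remedy in the error text points at exactly that. A minimal sketch of the gate, assuming names and message formats that are illustrative, not Calico's actual source:

```go
// Hypothetical sketch of the nodename gate implied by the error text above;
// function and constant names are assumptions, not Calico's source.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// determineNodename mirrors the failing check: calico/node writes this file
// at startup, and the CNI plugin refuses to do anything without it.
func determineNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		fmt.Println("cni-plugin:", err) // the condition flooding the log above
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```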
Sep 5 23:56:05.518008 containerd[1538]: time="2025-09-05T23:56:05.517426309Z" level=error msg="Failed to destroy network for sandbox \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:05.518404 containerd[1538]: time="2025-09-05T23:56:05.518360751Z" level=error msg="encountered an error cleaning up failed sandbox \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:05.518453 containerd[1538]: time="2025-09-05T23:56:05.518423391Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q8wpj,Uid:8a5e4a42-edd8-4334-8fc9-cc5dbb79316a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:05.520032 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef-shm.mount: Deactivated successfully. Sep 5 23:56:05.520406 kubelet[2607]: E0905 23:56:05.520369 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:05.520480 kubelet[2607]: E0905 23:56:05.520431 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q8wpj" Sep 5 23:56:05.520480 kubelet[2607]: E0905 23:56:05.520456 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q8wpj" Sep 5 23:56:05.520621 kubelet[2607]: E0905 23:56:05.520501 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-q8wpj_kube-system(8a5e4a42-edd8-4334-8fc9-cc5dbb79316a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-q8wpj_kube-system(8a5e4a42-edd8-4334-8fc9-cc5dbb79316a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q8wpj" podUID="8a5e4a42-edd8-4334-8fc9-cc5dbb79316a" Sep 5 23:56:05.523368 containerd[1538]: time="2025-09-05T23:56:05.523272720Z" level=error msg="Failed to destroy network for sandbox \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:05.524058 containerd[1538]: time="2025-09-05T23:56:05.524024081Z" level=error msg="encountered an error cleaning up failed sandbox \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:05.524121 containerd[1538]: time="2025-09-05T23:56:05.524075361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xp2hz,Uid:80ac692e-415b-4419-9e3c-f3f01df822d0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:05.524282 kubelet[2607]: E0905 23:56:05.524250 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:05.524320 kubelet[2607]: E0905 23:56:05.524295 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xp2hz" Sep 5 23:56:05.524345 kubelet[2607]: E0905 23:56:05.524312 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xp2hz" Sep 5 23:56:05.524384 kubelet[2607]: E0905 23:56:05.524361 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-xp2hz_kube-system(80ac692e-415b-4419-9e3c-f3f01df822d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-xp2hz_kube-system(80ac692e-415b-4419-9e3c-f3f01df822d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xp2hz" podUID="80ac692e-415b-4419-9e3c-f3f01df822d0" Sep 5 23:56:05.527248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1-shm.mount: Deactivated successfully. Sep 5 23:56:06.087874 containerd[1538]: time="2025-09-05T23:56:06.087826977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2czxt,Uid:3040e6c7-ae59-4353-9f35-08b7bd7a921f,Namespace:calico-system,Attempt:0,}" Sep 5 23:56:06.145378 containerd[1538]: time="2025-09-05T23:56:06.145331678Z" level=error msg="Failed to destroy network for sandbox \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:06.145717 containerd[1538]: time="2025-09-05T23:56:06.145690119Z" level=error msg="encountered an error cleaning up failed sandbox \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:06.145780 containerd[1538]: time="2025-09-05T23:56:06.145755319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2czxt,Uid:3040e6c7-ae59-4353-9f35-08b7bd7a921f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:06.145993 kubelet[2607]: E0905 23:56:06.145951 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:06.146071 kubelet[2607]: E0905 23:56:06.146017 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2czxt" Sep 5 23:56:06.146071 kubelet[2607]: E0905 23:56:06.146039 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2czxt" Sep 5 23:56:06.146122 kubelet[2607]: E0905 23:56:06.146079 2607 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2czxt_calico-system(3040e6c7-ae59-4353-9f35-08b7bd7a921f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2czxt_calico-system(3040e6c7-ae59-4353-9f35-08b7bd7a921f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2czxt" podUID="3040e6c7-ae59-4353-9f35-08b7bd7a921f" Sep 5 23:56:06.207091 kubelet[2607]: I0905 23:56:06.207047 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:06.209749 containerd[1538]: time="2025-09-05T23:56:06.209347552Z" level=info msg="StopPodSandbox for \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\"" Sep 5 23:56:06.209749 containerd[1538]: time="2025-09-05T23:56:06.209512072Z" level=info msg="Ensure that sandbox 1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef in task-service has been cleanup successfully" Sep 5 23:56:06.209900 kubelet[2607]: I0905 23:56:06.209413 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:06.210109 containerd[1538]: time="2025-09-05T23:56:06.210084913Z" level=info msg="StopPodSandbox for \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\"" Sep 5 23:56:06.210511 containerd[1538]: time="2025-09-05T23:56:06.210490154Z" level=info msg="Ensure that sandbox 1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c in task-service has been cleanup successfully" Sep 5 23:56:06.212103 kubelet[2607]: I0905 23:56:06.211633 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:06.212179 containerd[1538]: time="2025-09-05T23:56:06.212070397Z" level=info msg="StopPodSandbox for \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\"" Sep 5 23:56:06.212210 containerd[1538]: time="2025-09-05T23:56:06.212200277Z" level=info msg="Ensure that sandbox ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1 in task-service has been cleanup successfully" Sep 5 23:56:06.244293 containerd[1538]: time="2025-09-05T23:56:06.244236654Z" level=error msg="StopPodSandbox for \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\" failed" error="failed to destroy network for sandbox \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:06.244413 containerd[1538]: time="2025-09-05T23:56:06.244267934Z" level=error msg="StopPodSandbox for \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\" failed" error="failed to destroy network for sandbox \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 5 23:56:06.245005 kubelet[2607]: E0905 23:56:06.244971 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:06.245320 kubelet[2607]: E0905 23:56:06.245018 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef"} Sep 5 23:56:06.245320 kubelet[2607]: E0905 23:56:06.245050 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a5e4a42-edd8-4334-8fc9-cc5dbb79316a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:06.245320 kubelet[2607]: E0905 23:56:06.245071 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a5e4a42-edd8-4334-8fc9-cc5dbb79316a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q8wpj" podUID="8a5e4a42-edd8-4334-8fc9-cc5dbb79316a" Sep 5 23:56:06.245320 kubelet[2607]: E0905 23:56:06.245114 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:06.245320 kubelet[2607]: E0905 23:56:06.245131 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1"} Sep 5 23:56:06.245474 kubelet[2607]: E0905 23:56:06.245147 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"80ac692e-415b-4419-9e3c-f3f01df822d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:06.245474 kubelet[2607]: E0905 23:56:06.245163 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"80ac692e-415b-4419-9e3c-f3f01df822d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xp2hz" podUID="80ac692e-415b-4419-9e3c-f3f01df822d0" Sep 5 23:56:06.251779 containerd[1538]: time="2025-09-05T23:56:06.251681267Z" level=error msg="StopPodSandbox for \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\" failed" error="failed to destroy network for sandbox \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:06.251998 kubelet[2607]: E0905 23:56:06.251898 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:06.251998 kubelet[2607]: E0905 23:56:06.251981 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c"} Sep 5 23:56:06.252076 kubelet[2607]: E0905 23:56:06.252012 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3040e6c7-ae59-4353-9f35-08b7bd7a921f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:06.252076 kubelet[2607]: E0905 23:56:06.252037 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3040e6c7-ae59-4353-9f35-08b7bd7a921f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2czxt" podUID="3040e6c7-ae59-4353-9f35-08b7bd7a921f" Sep 5 23:56:06.458383 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c-shm.mount: Deactivated successfully. Sep 5 23:56:08.035185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019836645.mount: Deactivated successfully. 
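Editor's note: the RunPodSandbox failures above for coredns-7c65d6cfc9-q8wpj, coredns-7c65d6cfc9-xp2hz, and csi-node-driver-2czxt all follow one path: the CNI ADD fails, containerd attempts a cleanup DEL that fails for the same reason, the sandbox is marked SANDBOX_UNKNOWN, and the kubelet records a CreatePodSandboxError and requeues the pod. An illustrative control-flow sketch, with hypothetical types standing in for containerd's real CRI implementation:

```go
// Illustrative control flow only: the interface and function names are
// hypothetical stand-ins, not containerd's real API.
package main

import "fmt"

type cni interface {
	Add(sandboxID string) error
	Del(sandboxID string) error
}

// brokenCNI reproduces the state in the log: both directions fail while
// /var/lib/calico/nodename is missing.
type brokenCNI struct{}

func (brokenCNI) Add(string) error {
	return fmt.Errorf(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)
}
func (brokenCNI) Del(string) error {
	return fmt.Errorf(`plugin type="calico" failed (delete): stat /var/lib/calico/nodename: no such file or directory`)
}

func runPodSandbox(network cni, sandboxID string) error {
	if err := network.Add(sandboxID); err != nil {
		// ADD failed, so the runtime tries to tear the half-built network down.
		if delErr := network.Del(sandboxID); delErr != nil {
			// Both failed: the sandbox state becomes SANDBOX_UNKNOWN, as logged.
			fmt.Printf("encountered an error cleaning up failed sandbox %q, marking sandbox state as SANDBOX_UNKNOWN: %v\n", sandboxID, delErr)
		}
		// The error propagates over the CRI RPC; the kubelet records a
		// CreatePodSandboxError and requeues the pod ("Error syncing pod, skipping").
		return fmt.Errorf("failed to setup network for sandbox %q: %w", sandboxID, err)
	}
	return nil
}

func main() {
	if err := runPodSandbox(brokenCNI{}, "example-sandbox"); err != nil {
		fmt.Println("RunPodSandbox:", err)
	}
}
```

The errors cease once the calico-node container, pulled and started in the entries below, writes the missing nodename file.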
Sep 5 23:56:08.220094 containerd[1538]: time="2025-09-05T23:56:08.220049436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:08.220946 containerd[1538]: time="2025-09-05T23:56:08.220647917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 23:56:08.221559 containerd[1538]: time="2025-09-05T23:56:08.221516078Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:08.223317 containerd[1538]: time="2025-09-05T23:56:08.223281161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:08.223821 containerd[1538]: time="2025-09-05T23:56:08.223778642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.029417535s" Sep 5 23:56:08.223867 containerd[1538]: time="2025-09-05T23:56:08.223819202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 23:56:08.235775 containerd[1538]: time="2025-09-05T23:56:08.235741540Z" level=info msg="CreateContainer within sandbox \"3d861dcb005465c659513c27b4bfdc090abb29a03645ff4f00feeb481a138772\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 23:56:08.248340 containerd[1538]: time="2025-09-05T23:56:08.248227920Z" level=info msg="CreateContainer within sandbox \"3d861dcb005465c659513c27b4bfdc090abb29a03645ff4f00feeb481a138772\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b3a0fa6542adf2323323e2d0a27ac2142d854185dbcc298a7e2f68aa04a93b1a\"" Sep 5 23:56:08.248672 containerd[1538]: time="2025-09-05T23:56:08.248645320Z" level=info msg="StartContainer for \"b3a0fa6542adf2323323e2d0a27ac2142d854185dbcc298a7e2f68aa04a93b1a\"" Sep 5 23:56:08.333412 containerd[1538]: time="2025-09-05T23:56:08.333254732Z" level=info msg="StartContainer for \"b3a0fa6542adf2323323e2d0a27ac2142d854185dbcc298a7e2f68aa04a93b1a\" returns successfully" Sep 5 23:56:08.448108 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 23:56:08.448221 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 5 23:56:08.555007 containerd[1538]: time="2025-09-05T23:56:08.554676277Z" level=info msg="StopPodSandbox for \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\"" Sep 5 23:56:08.722988 containerd[1538]: 2025-09-05 23:56:08.648 [INFO][3890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:08.722988 containerd[1538]: 2025-09-05 23:56:08.648 [INFO][3890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns.
ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" iface="eth0" netns="/var/run/netns/cni-bb98c698-c9b9-4249-9a74-8a3a333cbefe" Sep 5 23:56:08.722988 containerd[1538]: 2025-09-05 23:56:08.649 [INFO][3890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" iface="eth0" netns="/var/run/netns/cni-bb98c698-c9b9-4249-9a74-8a3a333cbefe" Sep 5 23:56:08.722988 containerd[1538]: 2025-09-05 23:56:08.650 [INFO][3890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" iface="eth0" netns="/var/run/netns/cni-bb98c698-c9b9-4249-9a74-8a3a333cbefe" Sep 5 23:56:08.722988 containerd[1538]: 2025-09-05 23:56:08.650 [INFO][3890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:08.722988 containerd[1538]: 2025-09-05 23:56:08.650 [INFO][3890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:08.722988 containerd[1538]: 2025-09-05 23:56:08.708 [INFO][3901] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" HandleID="k8s-pod-network.d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Workload="localhost-k8s-whisker--cbcc9bc56--42xlw-eth0" Sep 5 23:56:08.722988 containerd[1538]: 2025-09-05 23:56:08.708 [INFO][3901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:08.722988 containerd[1538]: 2025-09-05 23:56:08.708 [INFO][3901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:08.722988 containerd[1538]: 2025-09-05 23:56:08.718 [WARNING][3901] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" HandleID="k8s-pod-network.d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Workload="localhost-k8s-whisker--cbcc9bc56--42xlw-eth0" Sep 5 23:56:08.722988 containerd[1538]: 2025-09-05 23:56:08.718 [INFO][3901] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" HandleID="k8s-pod-network.d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Workload="localhost-k8s-whisker--cbcc9bc56--42xlw-eth0" Sep 5 23:56:08.722988 containerd[1538]: 2025-09-05 23:56:08.719 [INFO][3901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:08.722988 containerd[1538]: 2025-09-05 23:56:08.721 [INFO][3890] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:08.723448 containerd[1538]: time="2025-09-05T23:56:08.723102739Z" level=info msg="TearDown network for sandbox \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\" successfully" Sep 5 23:56:08.723448 containerd[1538]: time="2025-09-05T23:56:08.723137299Z" level=info msg="StopPodSandbox for \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\" returns successfully" Sep 5 23:56:08.882356 kubelet[2607]: I0905 23:56:08.882313 2607 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r99qt\" (UniqueName: \"kubernetes.io/projected/8c3e2115-205c-4a12-b2ec-acd6067f08c2-kube-api-access-r99qt\") pod \"8c3e2115-205c-4a12-b2ec-acd6067f08c2\" (UID: \"8c3e2115-205c-4a12-b2ec-acd6067f08c2\") " Sep 5 23:56:08.882356 kubelet[2607]: I0905 23:56:08.882361 2607 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c3e2115-205c-4a12-b2ec-acd6067f08c2-whisker-backend-key-pair\") pod \"8c3e2115-205c-4a12-b2ec-acd6067f08c2\" (UID: \"8c3e2115-205c-4a12-b2ec-acd6067f08c2\") " Sep 5 23:56:08.882883 kubelet[2607]: I0905 23:56:08.882386 2607 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c3e2115-205c-4a12-b2ec-acd6067f08c2-whisker-ca-bundle\") pod \"8c3e2115-205c-4a12-b2ec-acd6067f08c2\" (UID: \"8c3e2115-205c-4a12-b2ec-acd6067f08c2\") " Sep 5 23:56:08.893103 kubelet[2607]: I0905 23:56:08.893055 2607 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3e2115-205c-4a12-b2ec-acd6067f08c2-kube-api-access-r99qt" (OuterVolumeSpecName: "kube-api-access-r99qt") pod "8c3e2115-205c-4a12-b2ec-acd6067f08c2" (UID: "8c3e2115-205c-4a12-b2ec-acd6067f08c2"). InnerVolumeSpecName "kube-api-access-r99qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 23:56:08.893197 kubelet[2607]: I0905 23:56:08.893154 2607 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3e2115-205c-4a12-b2ec-acd6067f08c2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8c3e2115-205c-4a12-b2ec-acd6067f08c2" (UID: "8c3e2115-205c-4a12-b2ec-acd6067f08c2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 23:56:08.900522 kubelet[2607]: I0905 23:56:08.900465 2607 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3e2115-205c-4a12-b2ec-acd6067f08c2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8c3e2115-205c-4a12-b2ec-acd6067f08c2" (UID: "8c3e2115-205c-4a12-b2ec-acd6067f08c2"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 5 23:56:08.983115 kubelet[2607]: I0905 23:56:08.982989 2607 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r99qt\" (UniqueName: \"kubernetes.io/projected/8c3e2115-205c-4a12-b2ec-acd6067f08c2-kube-api-access-r99qt\") on node \"localhost\" DevicePath \"\"" Sep 5 23:56:08.983115 kubelet[2607]: I0905 23:56:08.983028 2607 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c3e2115-205c-4a12-b2ec-acd6067f08c2-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 23:56:08.983115 kubelet[2607]: I0905 23:56:08.983040 2607 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c3e2115-205c-4a12-b2ec-acd6067f08c2-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 23:56:09.039637 systemd[1]: run-netns-cni\x2dbb98c698\x2dc9b9\x2d4249\x2d9a74\x2d8a3a333cbefe.mount: Deactivated successfully. Sep 5 23:56:09.039797 systemd[1]: var-lib-kubelet-pods-8c3e2115\x2d205c\x2d4a12\x2db2ec\x2dacd6067f08c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr99qt.mount: Deactivated successfully. Sep 5 23:56:09.039892 systemd[1]: var-lib-kubelet-pods-8c3e2115\x2d205c\x2d4a12\x2db2ec\x2dacd6067f08c2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 23:56:09.238888 kubelet[2607]: I0905 23:56:09.238726 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zmvts" podStartSLOduration=1.6203726999999999 podStartE2EDuration="11.238708679s" podCreationTimestamp="2025-09-05 23:55:58 +0000 UTC" firstStartedPulling="2025-09-05 23:55:58.606116784 +0000 UTC m=+17.613356093" lastFinishedPulling="2025-09-05 23:56:08.224452763 +0000 UTC m=+27.231692072" observedRunningTime="2025-09-05 23:56:09.238693799 +0000 UTC m=+28.245933068" watchObservedRunningTime="2025-09-05 23:56:09.238708679 +0000 UTC m=+28.245947988" Sep 5 23:56:09.385514 kubelet[2607]: I0905 23:56:09.385450 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd7e9a3c-ea48-4149-b0bf-1258fc59307b-whisker-ca-bundle\") pod \"whisker-67cbb9977f-7kvkn\" (UID: \"fd7e9a3c-ea48-4149-b0bf-1258fc59307b\") " pod="calico-system/whisker-67cbb9977f-7kvkn" Sep 5 23:56:09.385514 kubelet[2607]: I0905 23:56:09.385505 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fd7e9a3c-ea48-4149-b0bf-1258fc59307b-whisker-backend-key-pair\") pod \"whisker-67cbb9977f-7kvkn\" (UID: \"fd7e9a3c-ea48-4149-b0bf-1258fc59307b\") " pod="calico-system/whisker-67cbb9977f-7kvkn" Sep 5 23:56:09.385690 kubelet[2607]: I0905 23:56:09.385529 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5b5b\" (UniqueName: \"kubernetes.io/projected/fd7e9a3c-ea48-4149-b0bf-1258fc59307b-kube-api-access-k5b5b\") pod \"whisker-67cbb9977f-7kvkn\" (UID: \"fd7e9a3c-ea48-4149-b0bf-1258fc59307b\") " pod="calico-system/whisker-67cbb9977f-7kvkn" Sep 5 23:56:09.596120 containerd[1538]: time="2025-09-05T23:56:09.595995520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67cbb9977f-7kvkn,Uid:fd7e9a3c-ea48-4149-b0bf-1258fc59307b,Namespace:calico-system,Attempt:0,}" Sep 5 23:56:09.739691 
systemd-networkd[1231]: calie26b734ff6e: Link UP Sep 5 23:56:09.740379 systemd-networkd[1231]: calie26b734ff6e: Gained carrier Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.651 [INFO][3924] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.670 [INFO][3924] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--67cbb9977f--7kvkn-eth0 whisker-67cbb9977f- calico-system fd7e9a3c-ea48-4149-b0bf-1258fc59307b 880 0 2025-09-05 23:56:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67cbb9977f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-67cbb9977f-7kvkn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie26b734ff6e [] [] }} ContainerID="5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" Namespace="calico-system" Pod="whisker-67cbb9977f-7kvkn" WorkloadEndpoint="localhost-k8s-whisker--67cbb9977f--7kvkn-" Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.670 [INFO][3924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" Namespace="calico-system" Pod="whisker-67cbb9977f-7kvkn" WorkloadEndpoint="localhost-k8s-whisker--67cbb9977f--7kvkn-eth0" Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.694 [INFO][3938] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" HandleID="k8s-pod-network.5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" Workload="localhost-k8s-whisker--67cbb9977f--7kvkn-eth0" Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.694 [INFO][3938] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" HandleID="k8s-pod-network.5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" Workload="localhost-k8s-whisker--67cbb9977f--7kvkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-67cbb9977f-7kvkn", "timestamp":"2025-09-05 23:56:09.694453464 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.694 [INFO][3938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.694 [INFO][3938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.694 [INFO][3938] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.705 [INFO][3938] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" host="localhost" Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.710 [INFO][3938] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.715 [INFO][3938] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.717 [INFO][3938] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.719 [INFO][3938] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.719 [INFO][3938] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" host="localhost" Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.720 [INFO][3938] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.725 [INFO][3938] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" host="localhost" Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.730 [INFO][3938] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" host="localhost" Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.730 [INFO][3938] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" host="localhost" Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.730 [INFO][3938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
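Editor's note: the IPAM exchange above is Calico's block-based assignment: under a host-wide lock, the plugin confirms this host's affinity to the block 192.168.88.128/26, loads the block, and claims the first free address in it, 192.168.88.129, as the next entry confirms. A simplified sketch of that selection; the structures are stand-ins for Calico's real IPAM state, not its implementation:

```go
// Simplified block-based IPAM: claim the first free address in an affine
// block while holding the host-wide lock seen in the log above.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

var hostWideIPAMLock sync.Mutex

type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]bool
}

func (b *block) autoAssign() (netip.Addr, error) {
	// Skip the network address (.128), then take the first unallocated IP.
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if !b.allocated[a] {
			b.allocated[a] = true
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s is full", b.cidr)
}

func main() {
	hostWideIPAMLock.Lock() // "Acquired host-wide IPAM lock."
	defer hostWideIPAMLock.Unlock()

	b := &block{
		cidr:      netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]bool{},
	}
	ip, _ := b.autoAssign()
	fmt.Println("assigned", ip) // assigned 192.168.88.129, matching the log
}
```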
Sep 5 23:56:09.752176 containerd[1538]: 2025-09-05 23:56:09.730 [INFO][3938] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" HandleID="k8s-pod-network.5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" Workload="localhost-k8s-whisker--67cbb9977f--7kvkn-eth0" Sep 5 23:56:09.752717 containerd[1538]: 2025-09-05 23:56:09.732 [INFO][3924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" Namespace="calico-system" Pod="whisker-67cbb9977f-7kvkn" WorkloadEndpoint="localhost-k8s-whisker--67cbb9977f--7kvkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--67cbb9977f--7kvkn-eth0", GenerateName:"whisker-67cbb9977f-", Namespace:"calico-system", SelfLink:"", UID:"fd7e9a3c-ea48-4149-b0bf-1258fc59307b", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67cbb9977f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-67cbb9977f-7kvkn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie26b734ff6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:09.752717 containerd[1538]: 2025-09-05 23:56:09.732 [INFO][3924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" Namespace="calico-system" Pod="whisker-67cbb9977f-7kvkn" WorkloadEndpoint="localhost-k8s-whisker--67cbb9977f--7kvkn-eth0" Sep 5 23:56:09.752717 containerd[1538]: 2025-09-05 23:56:09.732 [INFO][3924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie26b734ff6e ContainerID="5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" Namespace="calico-system" Pod="whisker-67cbb9977f-7kvkn" WorkloadEndpoint="localhost-k8s-whisker--67cbb9977f--7kvkn-eth0" Sep 5 23:56:09.752717 containerd[1538]: 2025-09-05 23:56:09.740 [INFO][3924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" Namespace="calico-system" Pod="whisker-67cbb9977f-7kvkn" WorkloadEndpoint="localhost-k8s-whisker--67cbb9977f--7kvkn-eth0" Sep 5 23:56:09.752717 containerd[1538]: 2025-09-05 23:56:09.741 [INFO][3924] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" Namespace="calico-system" Pod="whisker-67cbb9977f-7kvkn" WorkloadEndpoint="localhost-k8s-whisker--67cbb9977f--7kvkn-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--67cbb9977f--7kvkn-eth0", GenerateName:"whisker-67cbb9977f-", Namespace:"calico-system", SelfLink:"", UID:"fd7e9a3c-ea48-4149-b0bf-1258fc59307b", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67cbb9977f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce", Pod:"whisker-67cbb9977f-7kvkn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie26b734ff6e", MAC:"0a:c4:d1:52:7d:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:09.752717 containerd[1538]: 2025-09-05 23:56:09.749 [INFO][3924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce" Namespace="calico-system" Pod="whisker-67cbb9977f-7kvkn" WorkloadEndpoint="localhost-k8s-whisker--67cbb9977f--7kvkn-eth0" Sep 5 23:56:09.780820 containerd[1538]: time="2025-09-05T23:56:09.780734550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:09.780820 containerd[1538]: time="2025-09-05T23:56:09.780783070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:09.781030 containerd[1538]: time="2025-09-05T23:56:09.780794430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:09.781105 containerd[1538]: time="2025-09-05T23:56:09.780912910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:09.803086 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 23:56:09.821580 containerd[1538]: time="2025-09-05T23:56:09.821528289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67cbb9977f-7kvkn,Uid:fd7e9a3c-ea48-4149-b0bf-1258fc59307b,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce\"" Sep 5 23:56:09.823296 containerd[1538]: time="2025-09-05T23:56:09.823262332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 23:56:10.070009 kernel: bpftool[4120]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 23:56:10.229379 systemd-networkd[1231]: vxlan.calico: Link UP Sep 5 23:56:10.229388 systemd-networkd[1231]: vxlan.calico: Gained carrier Sep 5 23:56:10.836445 containerd[1538]: time="2025-09-05T23:56:10.836371774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:10.837089 containerd[1538]: time="2025-09-05T23:56:10.837058895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 5 23:56:10.837956 containerd[1538]: time="2025-09-05T23:56:10.837919737Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:10.840371 containerd[1538]: time="2025-09-05T23:56:10.840338100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:10.840828 containerd[1538]: time="2025-09-05T23:56:10.840788700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.017490888s" Sep 5 23:56:10.840866 containerd[1538]: time="2025-09-05T23:56:10.840828341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 5 23:56:10.844169 containerd[1538]: time="2025-09-05T23:56:10.844115265Z" level=info msg="CreateContainer within sandbox \"5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 23:56:10.860942 containerd[1538]: time="2025-09-05T23:56:10.860894888Z" level=info msg="CreateContainer within sandbox \"5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"385b08bba531abe669e7426790425d7527f478954308576abcfab364bb9c33e2\"" Sep 5 23:56:10.862224 containerd[1538]: time="2025-09-05T23:56:10.862156410Z" level=info msg="StartContainer for \"385b08bba531abe669e7426790425d7527f478954308576abcfab364bb9c33e2\"" Sep 5 23:56:10.917391 containerd[1538]: time="2025-09-05T23:56:10.917348925Z" level=info msg="StartContainer for \"385b08bba531abe669e7426790425d7527f478954308576abcfab364bb9c33e2\" returns successfully" Sep 5 
23:56:10.920137 containerd[1538]: time="2025-09-05T23:56:10.920088049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 23:56:11.093112 kubelet[2607]: I0905 23:56:11.092999 2607 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c3e2115-205c-4a12-b2ec-acd6067f08c2" path="/var/lib/kubelet/pods/8c3e2115-205c-4a12-b2ec-acd6067f08c2/volumes" Sep 5 23:56:11.418173 systemd-networkd[1231]: calie26b734ff6e: Gained IPv6LL Sep 5 23:56:11.994091 systemd-networkd[1231]: vxlan.calico: Gained IPv6LL Sep 5 23:56:12.591894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1088608536.mount: Deactivated successfully. Sep 5 23:56:12.609023 containerd[1538]: time="2025-09-05T23:56:12.608952174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:12.610246 containerd[1538]: time="2025-09-05T23:56:12.610211655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 23:56:12.612158 containerd[1538]: time="2025-09-05T23:56:12.611832377Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:12.614812 containerd[1538]: time="2025-09-05T23:56:12.614765341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:12.615691 containerd[1538]: time="2025-09-05T23:56:12.615663182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.695536413s" Sep 5 23:56:12.615763 containerd[1538]: time="2025-09-05T23:56:12.615696862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 23:56:12.619566 containerd[1538]: time="2025-09-05T23:56:12.619434026Z" level=info msg="CreateContainer within sandbox \"5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 23:56:12.630271 containerd[1538]: time="2025-09-05T23:56:12.629917599Z" level=info msg="CreateContainer within sandbox \"5e9788c53d42efb43e24eb284bbd8ea13422ab47be154bc7a0c86eb366716dce\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"712a55d420adb6a22fa94ca70075d8cab18d21b463a3cfbd88e7cea613111da7\"" Sep 5 23:56:12.630867 containerd[1538]: time="2025-09-05T23:56:12.630643520Z" level=info msg="StartContainer for \"712a55d420adb6a22fa94ca70075d8cab18d21b463a3cfbd88e7cea613111da7\"" Sep 5 23:56:12.695467 containerd[1538]: time="2025-09-05T23:56:12.695411398Z" level=info msg="StartContainer for \"712a55d420adb6a22fa94ca70075d8cab18d21b463a3cfbd88e7cea613111da7\" returns successfully" Sep 5 23:56:13.270570 kubelet[2607]: I0905 23:56:13.270301 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-67cbb9977f-7kvkn" 
podStartSLOduration=1.476253297 podStartE2EDuration="4.270282029s" podCreationTimestamp="2025-09-05 23:56:09 +0000 UTC" firstStartedPulling="2025-09-05 23:56:09.822793491 +0000 UTC m=+28.830032760" lastFinishedPulling="2025-09-05 23:56:12.616822183 +0000 UTC m=+31.624061492" observedRunningTime="2025-09-05 23:56:13.269887828 +0000 UTC m=+32.277127137" watchObservedRunningTime="2025-09-05 23:56:13.270282029 +0000 UTC m=+32.277521338" Sep 5 23:56:17.086751 containerd[1538]: time="2025-09-05T23:56:17.086362824Z" level=info msg="StopPodSandbox for \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\"" Sep 5 23:56:17.086751 containerd[1538]: time="2025-09-05T23:56:17.086448984Z" level=info msg="StopPodSandbox for \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\"" Sep 5 23:56:17.200668 containerd[1538]: 2025-09-05 23:56:17.150 [INFO][4360] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:17.200668 containerd[1538]: 2025-09-05 23:56:17.151 [INFO][4360] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" iface="eth0" netns="/var/run/netns/cni-50bb33c2-92d0-d030-9726-139b7b676906" Sep 5 23:56:17.200668 containerd[1538]: 2025-09-05 23:56:17.151 [INFO][4360] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" iface="eth0" netns="/var/run/netns/cni-50bb33c2-92d0-d030-9726-139b7b676906" Sep 5 23:56:17.200668 containerd[1538]: 2025-09-05 23:56:17.152 [INFO][4360] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" iface="eth0" netns="/var/run/netns/cni-50bb33c2-92d0-d030-9726-139b7b676906" Sep 5 23:56:17.200668 containerd[1538]: 2025-09-05 23:56:17.152 [INFO][4360] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:17.200668 containerd[1538]: 2025-09-05 23:56:17.152 [INFO][4360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:17.200668 containerd[1538]: 2025-09-05 23:56:17.178 [INFO][4378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" HandleID="k8s-pod-network.65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Workload="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:17.200668 containerd[1538]: 2025-09-05 23:56:17.178 [INFO][4378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:17.200668 containerd[1538]: 2025-09-05 23:56:17.178 [INFO][4378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:17.200668 containerd[1538]: 2025-09-05 23:56:17.189 [WARNING][4378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" HandleID="k8s-pod-network.65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Workload="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:17.200668 containerd[1538]: 2025-09-05 23:56:17.189 [INFO][4378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" HandleID="k8s-pod-network.65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Workload="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:17.200668 containerd[1538]: 2025-09-05 23:56:17.191 [INFO][4378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:17.200668 containerd[1538]: 2025-09-05 23:56:17.197 [INFO][4360] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:17.203220 containerd[1538]: time="2025-09-05T23:56:17.203085126Z" level=info msg="TearDown network for sandbox \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\" successfully" Sep 5 23:56:17.203220 containerd[1538]: time="2025-09-05T23:56:17.203121926Z" level=info msg="StopPodSandbox for \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\" returns successfully" Sep 5 23:56:17.204249 containerd[1538]: time="2025-09-05T23:56:17.203873126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55df5c4477-4fndx,Uid:604f5067-a6d0-4576-889f-1368c927e973,Namespace:calico-system,Attempt:1,}" Sep 5 23:56:17.204614 systemd[1]: run-netns-cni\x2d50bb33c2\x2d92d0\x2dd030\x2d9726\x2d139b7b676906.mount: Deactivated successfully. Sep 5 23:56:17.219684 containerd[1538]: 2025-09-05 23:56:17.150 [INFO][4359] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:17.219684 containerd[1538]: 2025-09-05 23:56:17.150 [INFO][4359] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" iface="eth0" netns="/var/run/netns/cni-d1ae05bb-d74c-4b01-8a63-90c615265c8f" Sep 5 23:56:17.219684 containerd[1538]: 2025-09-05 23:56:17.151 [INFO][4359] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" iface="eth0" netns="/var/run/netns/cni-d1ae05bb-d74c-4b01-8a63-90c615265c8f" Sep 5 23:56:17.219684 containerd[1538]: 2025-09-05 23:56:17.151 [INFO][4359] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" iface="eth0" netns="/var/run/netns/cni-d1ae05bb-d74c-4b01-8a63-90c615265c8f" Sep 5 23:56:17.219684 containerd[1538]: 2025-09-05 23:56:17.151 [INFO][4359] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:17.219684 containerd[1538]: 2025-09-05 23:56:17.151 [INFO][4359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:17.219684 containerd[1538]: 2025-09-05 23:56:17.178 [INFO][4376] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" HandleID="k8s-pod-network.a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Workload="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:17.219684 containerd[1538]: 2025-09-05 23:56:17.179 [INFO][4376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:17.219684 containerd[1538]: 2025-09-05 23:56:17.191 [INFO][4376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:17.219684 containerd[1538]: 2025-09-05 23:56:17.206 [WARNING][4376] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" HandleID="k8s-pod-network.a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Workload="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:17.219684 containerd[1538]: 2025-09-05 23:56:17.207 [INFO][4376] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" HandleID="k8s-pod-network.a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Workload="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:17.219684 containerd[1538]: 2025-09-05 23:56:17.210 [INFO][4376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:17.219684 containerd[1538]: 2025-09-05 23:56:17.215 [INFO][4359] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:17.220488 containerd[1538]: time="2025-09-05T23:56:17.220130901Z" level=info msg="TearDown network for sandbox \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\" successfully" Sep 5 23:56:17.220488 containerd[1538]: time="2025-09-05T23:56:17.220158101Z" level=info msg="StopPodSandbox for \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\" returns successfully" Sep 5 23:56:17.221075 containerd[1538]: time="2025-09-05T23:56:17.221041821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7847fb757b-mxf96,Uid:882e113a-3ec9-4622-a5cc-bcbb76b3dde2,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:56:17.225559 systemd[1]: run-netns-cni\x2dd1ae05bb\x2dd74c\x2d4b01\x2d8a63\x2d90c615265c8f.mount: Deactivated successfully. 
Sep 5 23:56:17.330040 systemd-networkd[1231]: cali6d83e3534e3: Link UP Sep 5 23:56:17.331098 systemd-networkd[1231]: cali6d83e3534e3: Gained carrier Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.257 [INFO][4395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0 calico-kube-controllers-55df5c4477- calico-system 604f5067-a6d0-4576-889f-1368c927e973 922 0 2025-09-05 23:55:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55df5c4477 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55df5c4477-4fndx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6d83e3534e3 [] [] }} ContainerID="9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" Namespace="calico-system" Pod="calico-kube-controllers-55df5c4477-4fndx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-" Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.258 [INFO][4395] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" Namespace="calico-system" Pod="calico-kube-controllers-55df5c4477-4fndx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.290 [INFO][4420] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" HandleID="k8s-pod-network.9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" Workload="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.290 [INFO][4420] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" HandleID="k8s-pod-network.9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" Workload="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55df5c4477-4fndx", "timestamp":"2025-09-05 23:56:17.290558762 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.290 [INFO][4420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.291 [INFO][4420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
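The Workload= value in the IPAM request above, localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0, is Calico's flattened WorkloadEndpoint name: node, orchestrator, pod, and interface joined with single dashes, with dashes inside each component doubled so the separators stay unambiguous. A minimal reconstruction, with the convention inferred from the names in this log rather than from Calico's source (the helper name is ours):

    # Rebuild the WorkloadEndpoint name seen in the log above.
    def wep_name(node: str, pod: str, iface: str) -> str:
        esc = lambda s: s.replace("-", "--")    # a literal '-' is doubled
        return f"{esc(node)}-k8s-{esc(pod)}-{iface}"

    print(wep_name("localhost", "calico-kube-controllers-55df5c4477-4fndx", "eth0"))
    # -> localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0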
Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.291 [INFO][4420] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.299 [INFO][4420] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" host="localhost" Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.304 [INFO][4420] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.308 [INFO][4420] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.310 [INFO][4420] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.312 [INFO][4420] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.313 [INFO][4420] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" host="localhost" Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.315 [INFO][4420] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65 Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.319 [INFO][4420] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" host="localhost" Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.324 [INFO][4420] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" host="localhost" Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.324 [INFO][4420] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" host="localhost" Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.325 [INFO][4420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
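The walk above is Calico IPAM's normal fast path: look up the host's block affinity, load the block, then claim one address from it. The claimed 192.168.88.130/26 sits inside the affine block 192.168.88.128/26, which a quick stdlib check confirms (address arithmetic only; Calico's handle bookkeeping is not modeled here):

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")
    print(block.num_addresses)                                # 64 per /26 block
    print(ipaddress.ip_address("192.168.88.130") in block)    # True
    print(block.broadcast_address)                            # 192.168.88.191, last IP of the block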
Sep 5 23:56:17.346781 containerd[1538]: 2025-09-05 23:56:17.325 [INFO][4420] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" HandleID="k8s-pod-network.9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" Workload="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:17.347324 containerd[1538]: 2025-09-05 23:56:17.328 [INFO][4395] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" Namespace="calico-system" Pod="calico-kube-controllers-55df5c4477-4fndx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0", GenerateName:"calico-kube-controllers-55df5c4477-", Namespace:"calico-system", SelfLink:"", UID:"604f5067-a6d0-4576-889f-1368c927e973", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55df5c4477", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55df5c4477-4fndx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d83e3534e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:17.347324 containerd[1538]: 2025-09-05 23:56:17.328 [INFO][4395] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" Namespace="calico-system" Pod="calico-kube-controllers-55df5c4477-4fndx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:17.347324 containerd[1538]: 2025-09-05 23:56:17.328 [INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d83e3534e3 ContainerID="9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" Namespace="calico-system" Pod="calico-kube-controllers-55df5c4477-4fndx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:17.347324 containerd[1538]: 2025-09-05 23:56:17.331 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" Namespace="calico-system" Pod="calico-kube-controllers-55df5c4477-4fndx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:17.347324 containerd[1538]: 2025-09-05 23:56:17.331 [INFO][4395] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" Namespace="calico-system" Pod="calico-kube-controllers-55df5c4477-4fndx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0", GenerateName:"calico-kube-controllers-55df5c4477-", Namespace:"calico-system", SelfLink:"", UID:"604f5067-a6d0-4576-889f-1368c927e973", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55df5c4477", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65", Pod:"calico-kube-controllers-55df5c4477-4fndx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d83e3534e3", MAC:"42:13:05:40:61:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:17.347324 containerd[1538]: 2025-09-05 23:56:17.344 [INFO][4395] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65" Namespace="calico-system" Pod="calico-kube-controllers-55df5c4477-4fndx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:17.363199 containerd[1538]: time="2025-09-05T23:56:17.363110825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:17.363199 containerd[1538]: time="2025-09-05T23:56:17.363173185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:17.363199 containerd[1538]: time="2025-09-05T23:56:17.363192545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:17.363377 containerd[1538]: time="2025-09-05T23:56:17.363284065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:17.389311 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 23:56:17.419882 containerd[1538]: time="2025-09-05T23:56:17.419825515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55df5c4477-4fndx,Uid:604f5067-a6d0-4576-889f-1368c927e973,Namespace:calico-system,Attempt:1,} returns sandbox id \"9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65\"" Sep 5 23:56:17.422356 containerd[1538]: time="2025-09-05T23:56:17.422264717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 23:56:17.445778 systemd-networkd[1231]: cali72f18405429: Link UP Sep 5 23:56:17.446390 systemd-networkd[1231]: cali72f18405429: Gained carrier Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.299 [INFO][4411] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0 calico-apiserver-7847fb757b- calico-apiserver 882e113a-3ec9-4622-a5cc-bcbb76b3dde2 921 0 2025-09-05 23:55:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7847fb757b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7847fb757b-mxf96 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali72f18405429 [] [] }} ContainerID="c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-mxf96" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--mxf96-" Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.299 [INFO][4411] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-mxf96" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.332 [INFO][4430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" HandleID="k8s-pod-network.c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" Workload="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.332 [INFO][4430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" HandleID="k8s-pod-network.c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" Workload="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b1ce0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7847fb757b-mxf96", "timestamp":"2025-09-05 23:56:17.332712679 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.332 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.332 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.332 [INFO][4430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.401 [INFO][4430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" host="localhost" Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.414 [INFO][4430] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.419 [INFO][4430] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.422 [INFO][4430] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.425 [INFO][4430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.425 [INFO][4430] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" host="localhost" Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.427 [INFO][4430] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87 Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.431 [INFO][4430] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" host="localhost" Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.442 [INFO][4430] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" host="localhost" Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.442 [INFO][4430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" host="localhost" Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.442 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
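Because every assignment runs under the host-wide IPAM lock, claims come out of the block strictly in sequence: 192.168.88.130 for the kube-controllers pod above, 192.168.88.131 here for the apiserver pod. A toy first-free scan reproduces that order (illustration only: Calico's real allocator persists per-handle state in the datastore, and .129 is assumed to have been claimed earlier in this boot, e.g. by the whisker pod):

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")
    assigned = {"192.168.88.129", "192.168.88.130"}   # earlier claims (assumed + logged)

    # Naive next-free walk over the usable addresses in the block.
    next_ip = next(ip for ip in block.hosts() if str(ip) not in assigned)
    print(next_ip)                                     # 192.168.88.131, as claimed here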
Sep 5 23:56:17.461993 containerd[1538]: 2025-09-05 23:56:17.442 [INFO][4430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" HandleID="k8s-pod-network.c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" Workload="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:17.462520 containerd[1538]: 2025-09-05 23:56:17.444 [INFO][4411] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-mxf96" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0", GenerateName:"calico-apiserver-7847fb757b-", Namespace:"calico-apiserver", SelfLink:"", UID:"882e113a-3ec9-4622-a5cc-bcbb76b3dde2", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7847fb757b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7847fb757b-mxf96", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72f18405429", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:17.462520 containerd[1538]: 2025-09-05 23:56:17.444 [INFO][4411] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-mxf96" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:17.462520 containerd[1538]: 2025-09-05 23:56:17.444 [INFO][4411] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72f18405429 ContainerID="c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-mxf96" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:17.462520 containerd[1538]: 2025-09-05 23:56:17.446 [INFO][4411] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-mxf96" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:17.462520 containerd[1538]: 2025-09-05 23:56:17.446 [INFO][4411] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-mxf96" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0", GenerateName:"calico-apiserver-7847fb757b-", Namespace:"calico-apiserver", SelfLink:"", UID:"882e113a-3ec9-4622-a5cc-bcbb76b3dde2", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7847fb757b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87", Pod:"calico-apiserver-7847fb757b-mxf96", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72f18405429", MAC:"e6:3a:61:8a:07:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:17.462520 containerd[1538]: 2025-09-05 23:56:17.459 [INFO][4411] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-mxf96" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:17.479539 containerd[1538]: time="2025-09-05T23:56:17.479432246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:17.479539 containerd[1538]: time="2025-09-05T23:56:17.479521327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:17.479539 containerd[1538]: time="2025-09-05T23:56:17.479538087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:17.479805 containerd[1538]: time="2025-09-05T23:56:17.479756887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:17.512372 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 23:56:17.537075 containerd[1538]: time="2025-09-05T23:56:17.537035577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7847fb757b-mxf96,Uid:882e113a-3ec9-4622-a5cc-bcbb76b3dde2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87\"" Sep 5 23:56:18.086239 containerd[1538]: time="2025-09-05T23:56:18.086195850Z" level=info msg="StopPodSandbox for \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\"" Sep 5 23:56:18.086741 containerd[1538]: time="2025-09-05T23:56:18.086470291Z" level=info msg="StopPodSandbox for \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\"" Sep 5 23:56:18.183704 containerd[1538]: 2025-09-05 23:56:18.140 [INFO][4567] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:18.183704 containerd[1538]: 2025-09-05 23:56:18.140 [INFO][4567] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" iface="eth0" netns="/var/run/netns/cni-b330b8af-46b1-3d96-2d5e-45ba29eaf761" Sep 5 23:56:18.183704 containerd[1538]: 2025-09-05 23:56:18.141 [INFO][4567] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" iface="eth0" netns="/var/run/netns/cni-b330b8af-46b1-3d96-2d5e-45ba29eaf761" Sep 5 23:56:18.183704 containerd[1538]: 2025-09-05 23:56:18.141 [INFO][4567] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" iface="eth0" netns="/var/run/netns/cni-b330b8af-46b1-3d96-2d5e-45ba29eaf761" Sep 5 23:56:18.183704 containerd[1538]: 2025-09-05 23:56:18.141 [INFO][4567] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:18.183704 containerd[1538]: 2025-09-05 23:56:18.141 [INFO][4567] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:18.183704 containerd[1538]: 2025-09-05 23:56:18.167 [INFO][4578] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" HandleID="k8s-pod-network.1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Workload="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:18.183704 containerd[1538]: 2025-09-05 23:56:18.167 [INFO][4578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:18.183704 containerd[1538]: 2025-09-05 23:56:18.167 [INFO][4578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:18.183704 containerd[1538]: 2025-09-05 23:56:18.176 [WARNING][4578] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" HandleID="k8s-pod-network.1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Workload="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:18.183704 containerd[1538]: 2025-09-05 23:56:18.177 [INFO][4578] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" HandleID="k8s-pod-network.1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Workload="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:18.183704 containerd[1538]: 2025-09-05 23:56:18.179 [INFO][4578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:18.183704 containerd[1538]: 2025-09-05 23:56:18.182 [INFO][4567] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:18.184614 containerd[1538]: time="2025-09-05T23:56:18.184491011Z" level=info msg="TearDown network for sandbox \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\" successfully" Sep 5 23:56:18.184614 containerd[1538]: time="2025-09-05T23:56:18.184526451Z" level=info msg="StopPodSandbox for \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\" returns successfully" Sep 5 23:56:18.184958 kubelet[2607]: E0905 23:56:18.184910 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:56:18.185331 containerd[1538]: time="2025-09-05T23:56:18.185297411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q8wpj,Uid:8a5e4a42-edd8-4334-8fc9-cc5dbb79316a,Namespace:kube-system,Attempt:1,}" Sep 5 23:56:18.198594 containerd[1538]: 2025-09-05 23:56:18.149 [INFO][4562] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:18.198594 containerd[1538]: 2025-09-05 23:56:18.150 [INFO][4562] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" iface="eth0" netns="/var/run/netns/cni-63d79c87-40bb-e42a-df86-ccb945c0a139" Sep 5 23:56:18.198594 containerd[1538]: 2025-09-05 23:56:18.150 [INFO][4562] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" iface="eth0" netns="/var/run/netns/cni-63d79c87-40bb-e42a-df86-ccb945c0a139" Sep 5 23:56:18.198594 containerd[1538]: 2025-09-05 23:56:18.150 [INFO][4562] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" iface="eth0" netns="/var/run/netns/cni-63d79c87-40bb-e42a-df86-ccb945c0a139" Sep 5 23:56:18.198594 containerd[1538]: 2025-09-05 23:56:18.150 [INFO][4562] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:18.198594 containerd[1538]: 2025-09-05 23:56:18.150 [INFO][4562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:18.198594 containerd[1538]: 2025-09-05 23:56:18.174 [INFO][4584] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" HandleID="k8s-pod-network.d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Workload="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:18.198594 containerd[1538]: 2025-09-05 23:56:18.174 [INFO][4584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:18.198594 containerd[1538]: 2025-09-05 23:56:18.179 [INFO][4584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:18.198594 containerd[1538]: 2025-09-05 23:56:18.191 [WARNING][4584] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" HandleID="k8s-pod-network.d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Workload="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:18.198594 containerd[1538]: 2025-09-05 23:56:18.191 [INFO][4584] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" HandleID="k8s-pod-network.d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Workload="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:18.198594 containerd[1538]: 2025-09-05 23:56:18.192 [INFO][4584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:18.198594 containerd[1538]: 2025-09-05 23:56:18.194 [INFO][4562] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:18.199674 containerd[1538]: time="2025-09-05T23:56:18.199301223Z" level=info msg="TearDown network for sandbox \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\" successfully" Sep 5 23:56:18.199674 containerd[1538]: time="2025-09-05T23:56:18.199339343Z" level=info msg="StopPodSandbox for \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\" returns successfully" Sep 5 23:56:18.200021 containerd[1538]: time="2025-09-05T23:56:18.199935303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-dnd4d,Uid:d96f52bf-df90-4603-8a34-8bdf02a0c13f,Namespace:calico-system,Attempt:1,}" Sep 5 23:56:18.208746 systemd[1]: run-netns-cni\x2db330b8af\x2d46b1\x2d3d96\x2d2d5e\x2d45ba29eaf761.mount: Deactivated successfully. Sep 5 23:56:18.209135 systemd[1]: run-netns-cni\x2d63d79c87\x2d40bb\x2de42a\x2ddf86\x2dccb945c0a139.mount: Deactivated successfully. 
Sep 5 23:56:18.361245 systemd-networkd[1231]: calie77b0138297: Link UP Sep 5 23:56:18.364298 systemd-networkd[1231]: calie77b0138297: Gained carrier Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.251 [INFO][4605] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--dnd4d-eth0 goldmane-7988f88666- calico-system d96f52bf-df90-4603-8a34-8bdf02a0c13f 936 0 2025-09-05 23:55:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-dnd4d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie77b0138297 [] [] }} ContainerID="7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" Namespace="calico-system" Pod="goldmane-7988f88666-dnd4d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--dnd4d-" Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.251 [INFO][4605] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" Namespace="calico-system" Pod="goldmane-7988f88666-dnd4d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.281 [INFO][4629] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" HandleID="k8s-pod-network.7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" Workload="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.282 [INFO][4629] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" HandleID="k8s-pod-network.7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" Workload="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-dnd4d", "timestamp":"2025-09-05 23:56:18.28164373 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.282 [INFO][4629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.282 [INFO][4629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.282 [INFO][4629] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.292 [INFO][4629] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" host="localhost" Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.300 [INFO][4629] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.305 [INFO][4629] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.308 [INFO][4629] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.310 [INFO][4629] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.310 [INFO][4629] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" host="localhost" Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.312 [INFO][4629] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0 Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.347 [INFO][4629] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" host="localhost" Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.353 [INFO][4629] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" host="localhost" Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.353 [INFO][4629] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" host="localhost" Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.353 [INFO][4629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:56:18.379906 containerd[1538]: 2025-09-05 23:56:18.353 [INFO][4629] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" HandleID="k8s-pod-network.7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" Workload="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:18.380795 containerd[1538]: 2025-09-05 23:56:18.356 [INFO][4605] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" Namespace="calico-system" Pod="goldmane-7988f88666-dnd4d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--dnd4d-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d96f52bf-df90-4603-8a34-8bdf02a0c13f", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-dnd4d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie77b0138297", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:18.380795 containerd[1538]: 2025-09-05 23:56:18.356 [INFO][4605] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" Namespace="calico-system" Pod="goldmane-7988f88666-dnd4d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:18.380795 containerd[1538]: 2025-09-05 23:56:18.356 [INFO][4605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie77b0138297 ContainerID="7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" Namespace="calico-system" Pod="goldmane-7988f88666-dnd4d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:18.380795 containerd[1538]: 2025-09-05 23:56:18.366 [INFO][4605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" Namespace="calico-system" Pod="goldmane-7988f88666-dnd4d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:18.380795 containerd[1538]: 2025-09-05 23:56:18.366 [INFO][4605] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" Namespace="calico-system" Pod="goldmane-7988f88666-dnd4d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--dnd4d-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d96f52bf-df90-4603-8a34-8bdf02a0c13f", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0", Pod:"goldmane-7988f88666-dnd4d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie77b0138297", MAC:"6a:5e:cc:b5:a3:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:18.380795 containerd[1538]: 2025-09-05 23:56:18.377 [INFO][4605] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0" Namespace="calico-system" Pod="goldmane-7988f88666-dnd4d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:18.401012 containerd[1538]: time="2025-09-05T23:56:18.400747427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:18.401371 containerd[1538]: time="2025-09-05T23:56:18.401329468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:18.401535 containerd[1538]: time="2025-09-05T23:56:18.401507908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:18.402303 containerd[1538]: time="2025-09-05T23:56:18.402100748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:18.444723 systemd-networkd[1231]: cali005e1591508: Link UP Sep 5 23:56:18.444936 systemd-networkd[1231]: cali005e1591508: Gained carrier Sep 5 23:56:18.446635 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.247 [INFO][4595] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0 coredns-7c65d6cfc9- kube-system 8a5e4a42-edd8-4334-8fc9-cc5dbb79316a 935 0 2025-09-05 23:55:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-q8wpj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali005e1591508 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8wpj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q8wpj-" Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.248 [INFO][4595] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8wpj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.292 [INFO][4623] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" HandleID="k8s-pod-network.db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" Workload="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.292 [INFO][4623] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" HandleID="k8s-pod-network.db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" Workload="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-q8wpj", "timestamp":"2025-09-05 23:56:18.292270939 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.292 [INFO][4623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.353 [INFO][4623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.353 [INFO][4623] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.393 [INFO][4623] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" host="localhost" Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.402 [INFO][4623] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.409 [INFO][4623] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.411 [INFO][4623] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.415 [INFO][4623] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.415 [INFO][4623] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" host="localhost" Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.417 [INFO][4623] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.429 [INFO][4623] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" host="localhost" Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.439 [INFO][4623] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" host="localhost" Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.439 [INFO][4623] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" host="localhost" Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.439 [INFO][4623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
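The entries from "About to acquire host-wide IPAM lock." through "Released host-wide IPAM lock." above form one complete Calico IPAM transaction: confirm the host's affinity to block 192.168.88.128/26, claim the next free address (.133), write the block back, release the lock. If you need to pull the claimed address out of journal output like this, a small stdlib-only sketch (the pattern is illustrative, not part of Calico):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        line := `ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26`
        // Capture the CIDR list printed between square brackets after "claimed IPs:".
        re := regexp.MustCompile(`Successfully claimed IPs: \[([^\]]+)\]`)
        if m := re.FindStringSubmatch(line); m != nil {
            fmt.Println("claimed:", m[1]) // claimed: 192.168.88.133/26
        }
    }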
Sep 5 23:56:18.474983 containerd[1538]: 2025-09-05 23:56:18.439 [INFO][4623] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" HandleID="k8s-pod-network.db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" Workload="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:18.476092 containerd[1538]: 2025-09-05 23:56:18.442 [INFO][4595] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8wpj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8a5e4a42-edd8-4334-8fc9-cc5dbb79316a", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-q8wpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali005e1591508", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:18.476092 containerd[1538]: 2025-09-05 23:56:18.442 [INFO][4595] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8wpj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:18.476092 containerd[1538]: 2025-09-05 23:56:18.442 [INFO][4595] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali005e1591508 ContainerID="db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8wpj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:18.476092 containerd[1538]: 2025-09-05 23:56:18.444 [INFO][4595] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8wpj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:18.476092 
containerd[1538]: 2025-09-05 23:56:18.445 [INFO][4595] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8wpj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8a5e4a42-edd8-4334-8fc9-cc5dbb79316a", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e", Pod:"coredns-7c65d6cfc9-q8wpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali005e1591508", MAC:"fa:1e:88:7c:e4:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:18.476092 containerd[1538]: 2025-09-05 23:56:18.468 [INFO][4595] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8wpj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:18.486778 containerd[1538]: time="2025-09-05T23:56:18.486718657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-dnd4d,Uid:d96f52bf-df90-4603-8a34-8bdf02a0c13f,Namespace:calico-system,Attempt:1,} returns sandbox id \"7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0\"" Sep 5 23:56:18.516558 containerd[1538]: time="2025-09-05T23:56:18.515986801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:18.516804 containerd[1538]: time="2025-09-05T23:56:18.516667522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:18.516804 containerd[1538]: time="2025-09-05T23:56:18.516733162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:18.517098 containerd[1538]: time="2025-09-05T23:56:18.517029082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:18.539854 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 23:56:18.569074 containerd[1538]: time="2025-09-05T23:56:18.568881124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q8wpj,Uid:8a5e4a42-edd8-4334-8fc9-cc5dbb79316a,Namespace:kube-system,Attempt:1,} returns sandbox id \"db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e\"" Sep 5 23:56:18.570145 kubelet[2607]: E0905 23:56:18.570109 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:56:18.573040 containerd[1538]: time="2025-09-05T23:56:18.573006088Z" level=info msg="CreateContainer within sandbox \"db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:56:18.694927 containerd[1538]: time="2025-09-05T23:56:18.694871867Z" level=info msg="CreateContainer within sandbox \"db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"06937fc80a6b6655d508488321e6638ed66307e96c86a23f8c9d8457cdd361f2\"" Sep 5 23:56:18.695809 containerd[1538]: time="2025-09-05T23:56:18.695673988Z" level=info msg="StartContainer for \"06937fc80a6b6655d508488321e6638ed66307e96c86a23f8c9d8457cdd361f2\"" Sep 5 23:56:18.810629 containerd[1538]: time="2025-09-05T23:56:18.810454402Z" level=info msg="StartContainer for \"06937fc80a6b6655d508488321e6638ed66307e96c86a23f8c9d8457cdd361f2\" returns successfully" Sep 5 23:56:18.971086 systemd-networkd[1231]: cali6d83e3534e3: Gained IPv6LL Sep 5 23:56:19.086873 containerd[1538]: time="2025-09-05T23:56:19.086809743Z" level=info msg="StopPodSandbox for \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\"" Sep 5 23:56:19.087410 containerd[1538]: time="2025-09-05T23:56:19.087375543Z" level=info msg="StopPodSandbox for \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\"" Sep 5 23:56:19.163127 systemd-networkd[1231]: cali72f18405429: Gained IPv6LL Sep 5 23:56:19.210000 containerd[1538]: 2025-09-05 23:56:19.157 [INFO][4810] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:19.210000 containerd[1538]: 2025-09-05 23:56:19.157 [INFO][4810] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" iface="eth0" netns="/var/run/netns/cni-6d3880ca-411f-6e2b-25c2-15990234d802" Sep 5 23:56:19.210000 containerd[1538]: 2025-09-05 23:56:19.158 [INFO][4810] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" iface="eth0" netns="/var/run/netns/cni-6d3880ca-411f-6e2b-25c2-15990234d802" Sep 5 23:56:19.210000 containerd[1538]: 2025-09-05 23:56:19.159 [INFO][4810] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" iface="eth0" netns="/var/run/netns/cni-6d3880ca-411f-6e2b-25c2-15990234d802" Sep 5 23:56:19.210000 containerd[1538]: 2025-09-05 23:56:19.159 [INFO][4810] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:19.210000 containerd[1538]: 2025-09-05 23:56:19.159 [INFO][4810] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:19.210000 containerd[1538]: 2025-09-05 23:56:19.187 [INFO][4822] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" HandleID="k8s-pod-network.ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Workload="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:19.210000 containerd[1538]: 2025-09-05 23:56:19.187 [INFO][4822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:19.210000 containerd[1538]: 2025-09-05 23:56:19.187 [INFO][4822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:19.210000 containerd[1538]: 2025-09-05 23:56:19.197 [WARNING][4822] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" HandleID="k8s-pod-network.ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Workload="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:19.210000 containerd[1538]: 2025-09-05 23:56:19.197 [INFO][4822] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" HandleID="k8s-pod-network.ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Workload="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:19.210000 containerd[1538]: 2025-09-05 23:56:19.199 [INFO][4822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:19.210000 containerd[1538]: 2025-09-05 23:56:19.203 [INFO][4810] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:19.212539 systemd[1]: run-netns-cni\x2d6d3880ca\x2d411f\x2d6e2b\x2d25c2\x2d15990234d802.mount: Deactivated successfully. 
Sep 5 23:56:19.213628 containerd[1538]: time="2025-09-05T23:56:19.213453720Z" level=info msg="TearDown network for sandbox \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\" successfully" Sep 5 23:56:19.213628 containerd[1538]: time="2025-09-05T23:56:19.213526360Z" level=info msg="StopPodSandbox for \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\" returns successfully" Sep 5 23:56:19.213851 kubelet[2607]: E0905 23:56:19.213829 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:56:19.214334 containerd[1538]: time="2025-09-05T23:56:19.214214960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xp2hz,Uid:80ac692e-415b-4419-9e3c-f3f01df822d0,Namespace:kube-system,Attempt:1,}" Sep 5 23:56:19.221835 containerd[1538]: 2025-09-05 23:56:19.167 [INFO][4805] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:19.221835 containerd[1538]: 2025-09-05 23:56:19.168 [INFO][4805] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" iface="eth0" netns="/var/run/netns/cni-7a6b1ab9-5528-a362-82fc-6380e1f71def" Sep 5 23:56:19.221835 containerd[1538]: 2025-09-05 23:56:19.168 [INFO][4805] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" iface="eth0" netns="/var/run/netns/cni-7a6b1ab9-5528-a362-82fc-6380e1f71def" Sep 5 23:56:19.221835 containerd[1538]: 2025-09-05 23:56:19.168 [INFO][4805] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" iface="eth0" netns="/var/run/netns/cni-7a6b1ab9-5528-a362-82fc-6380e1f71def" Sep 5 23:56:19.221835 containerd[1538]: 2025-09-05 23:56:19.169 [INFO][4805] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:19.221835 containerd[1538]: 2025-09-05 23:56:19.169 [INFO][4805] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:19.221835 containerd[1538]: 2025-09-05 23:56:19.202 [INFO][4828] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" HandleID="k8s-pod-network.16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Workload="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:19.221835 containerd[1538]: 2025-09-05 23:56:19.203 [INFO][4828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:19.221835 containerd[1538]: 2025-09-05 23:56:19.203 [INFO][4828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:19.221835 containerd[1538]: 2025-09-05 23:56:19.215 [WARNING][4828] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" HandleID="k8s-pod-network.16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Workload="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:19.221835 containerd[1538]: 2025-09-05 23:56:19.215 [INFO][4828] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" HandleID="k8s-pod-network.16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Workload="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:19.221835 containerd[1538]: 2025-09-05 23:56:19.217 [INFO][4828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:19.221835 containerd[1538]: 2025-09-05 23:56:19.219 [INFO][4805] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:19.231015 containerd[1538]: time="2025-09-05T23:56:19.229070572Z" level=info msg="TearDown network for sandbox \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\" successfully" Sep 5 23:56:19.231015 containerd[1538]: time="2025-09-05T23:56:19.229116372Z" level=info msg="StopPodSandbox for \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\" returns successfully" Sep 5 23:56:19.231015 containerd[1538]: time="2025-09-05T23:56:19.230511253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7847fb757b-kgbj8,Uid:7cbedf47-c538-40ff-85f2-c633bdff94ba,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:56:19.231647 systemd[1]: run-netns-cni\x2d7a6b1ab9\x2d5528\x2da362\x2d82fc\x2d6380e1f71def.mount: Deactivated successfully. Sep 5 23:56:19.318783 kubelet[2607]: E0905 23:56:19.318550 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:56:19.333252 kubelet[2607]: I0905 23:56:19.333161 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q8wpj" podStartSLOduration=33.333141892 podStartE2EDuration="33.333141892s" podCreationTimestamp="2025-09-05 23:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:56:19.332855931 +0000 UTC m=+38.340095240" watchObservedRunningTime="2025-09-05 23:56:19.333141892 +0000 UTC m=+38.340381201" Sep 5 23:56:19.430451 systemd-networkd[1231]: calib99e8adca02: Link UP Sep 5 23:56:19.431368 systemd-networkd[1231]: calib99e8adca02: Gained carrier Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.310 [INFO][4838] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0 coredns-7c65d6cfc9- kube-system 80ac692e-415b-4419-9e3c-f3f01df822d0 952 0 2025-09-05 23:55:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-xp2hz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib99e8adca02 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xp2hz" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xp2hz-" Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.310 [INFO][4838] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xp2hz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.373 [INFO][4869] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" HandleID="k8s-pod-network.62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" Workload="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.374 [INFO][4869] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" HandleID="k8s-pod-network.62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" Workload="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137510), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-xp2hz", "timestamp":"2025-09-05 23:56:19.373559802 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.374 [INFO][4869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.374 [INFO][4869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.374 [INFO][4869] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.384 [INFO][4869] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" host="localhost" Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.394 [INFO][4869] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.401 [INFO][4869] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.404 [INFO][4869] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.407 [INFO][4869] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.407 [INFO][4869] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" host="localhost" Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.409 [INFO][4869] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.415 [INFO][4869] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" host="localhost" Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.424 [INFO][4869] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" host="localhost" Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.424 [INFO][4869] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" host="localhost" Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.424 [INFO][4869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
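Every claim in this log lands in the same affine block, 192.168.88.128/26, i.e. the 64 addresses 192.168.88.128 through 192.168.88.191; the .132 to .136 assignments are sequential picks from that one block. The range arithmetic, as a stdlib-only sketch with net/netip:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        p := netip.MustParsePrefix("192.168.88.128/26")
        first := p.Masked().Addr()
        // Block size: 2^(32 - prefix length) addresses.
        n := uint32(1) << (32 - p.Bits())
        a := first.As4()
        v := uint32(a[0])<<24 | uint32(a[1])<<16 | uint32(a[2])<<8 | uint32(a[3])
        v += n - 1
        last := netip.AddrFrom4([4]byte{byte(v >> 24), byte(v >> 16), byte(v >> 8), byte(v)})
        fmt.Printf("block %s: %s - %s (%d addresses)\n", p, first, last, n)
        // block 192.168.88.128/26: 192.168.88.128 - 192.168.88.191 (64 addresses)
    }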
Sep 5 23:56:19.449341 containerd[1538]: 2025-09-05 23:56:19.424 [INFO][4869] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" HandleID="k8s-pod-network.62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" Workload="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:19.449867 containerd[1538]: 2025-09-05 23:56:19.427 [INFO][4838] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xp2hz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"80ac692e-415b-4419-9e3c-f3f01df822d0", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-xp2hz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib99e8adca02", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:19.449867 containerd[1538]: 2025-09-05 23:56:19.427 [INFO][4838] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xp2hz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:19.449867 containerd[1538]: 2025-09-05 23:56:19.427 [INFO][4838] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib99e8adca02 ContainerID="62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xp2hz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:19.449867 containerd[1538]: 2025-09-05 23:56:19.433 [INFO][4838] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xp2hz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:19.449867 
containerd[1538]: 2025-09-05 23:56:19.434 [INFO][4838] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xp2hz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"80ac692e-415b-4419-9e3c-f3f01df822d0", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f", Pod:"coredns-7c65d6cfc9-xp2hz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib99e8adca02", MAC:"e2:51:0e:04:ff:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:19.449867 containerd[1538]: 2025-09-05 23:56:19.446 [INFO][4838] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xp2hz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:19.478141 containerd[1538]: time="2025-09-05T23:56:19.477592922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:19.478141 containerd[1538]: time="2025-09-05T23:56:19.477645882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:19.478141 containerd[1538]: time="2025-09-05T23:56:19.477656602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:19.478141 containerd[1538]: time="2025-09-05T23:56:19.477737522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:19.486644 containerd[1538]: time="2025-09-05T23:56:19.486567609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:19.487617 containerd[1538]: time="2025-09-05T23:56:19.487564690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 5 23:56:19.489974 containerd[1538]: time="2025-09-05T23:56:19.489510851Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:19.492870 containerd[1538]: time="2025-09-05T23:56:19.492764254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:19.493619 containerd[1538]: time="2025-09-05T23:56:19.493572694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.071260457s" Sep 5 23:56:19.493718 containerd[1538]: time="2025-09-05T23:56:19.493701694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 5 23:56:19.501507 containerd[1538]: time="2025-09-05T23:56:19.500891780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:56:19.507329 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 23:56:19.508562 containerd[1538]: time="2025-09-05T23:56:19.508527106Z" level=info msg="CreateContainer within sandbox \"9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 23:56:19.536544 containerd[1538]: time="2025-09-05T23:56:19.536508807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xp2hz,Uid:80ac692e-415b-4419-9e3c-f3f01df822d0,Namespace:kube-system,Attempt:1,} returns sandbox id \"62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f\"" Sep 5 23:56:19.537385 kubelet[2607]: E0905 23:56:19.537311 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:56:19.538234 containerd[1538]: time="2025-09-05T23:56:19.538014448Z" level=info msg="CreateContainer within sandbox \"9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e27e96116f46186b4bbbdde4de993f363740b61c94d5061cad503b4d8d28184e\"" Sep 5 23:56:19.540110 containerd[1538]: time="2025-09-05T23:56:19.539608450Z" level=info msg="CreateContainer within sandbox \"62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:56:19.540110 containerd[1538]: 
time="2025-09-05T23:56:19.539890170Z" level=info msg="StartContainer for \"e27e96116f46186b4bbbdde4de993f363740b61c94d5061cad503b4d8d28184e\"" Sep 5 23:56:19.549585 systemd-networkd[1231]: calibc691184605: Link UP Sep 5 23:56:19.550607 systemd-networkd[1231]: calibc691184605: Gained carrier Sep 5 23:56:19.558399 containerd[1538]: time="2025-09-05T23:56:19.558350384Z" level=info msg="CreateContainer within sandbox \"62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8585b9b98f163e0b64ad6020a8812d518b65cf04ae819b451cd87036270cfad8\"" Sep 5 23:56:19.559238 containerd[1538]: time="2025-09-05T23:56:19.559209745Z" level=info msg="StartContainer for \"8585b9b98f163e0b64ad6020a8812d518b65cf04ae819b451cd87036270cfad8\"" Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.330 [INFO][4850] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0 calico-apiserver-7847fb757b- calico-apiserver 7cbedf47-c538-40ff-85f2-c633bdff94ba 953 0 2025-09-05 23:55:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7847fb757b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7847fb757b-kgbj8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibc691184605 [] [] }} ContainerID="2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-kgbj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-" Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.330 [INFO][4850] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-kgbj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.376 [INFO][4879] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" HandleID="k8s-pod-network.2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" Workload="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.376 [INFO][4879] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" HandleID="k8s-pod-network.2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" Workload="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005128b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7847fb757b-kgbj8", "timestamp":"2025-09-05 23:56:19.376759125 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.376 [INFO][4879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.424 [INFO][4879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.424 [INFO][4879] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.485 [INFO][4879] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" host="localhost" Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.497 [INFO][4879] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.509 [INFO][4879] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.511 [INFO][4879] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.516 [INFO][4879] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.516 [INFO][4879] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" host="localhost" Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.523 [INFO][4879] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0 Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.529 [INFO][4879] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" host="localhost" Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.540 [INFO][4879] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" host="localhost" Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.540 [INFO][4879] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" host="localhost" Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.540 [INFO][4879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
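The timestamps above make the host-wide IPAM lock visible: request [4879] logs "About to acquire" at 23:56:19.376 but "Acquired" only at 23:56:19.424, the same instant request [4869] logs "Released". Concurrent CNI ADDs on one node therefore serialize their block updates instead of racing. A toy model of the pattern (illustrative only, not Calico's code):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var mu sync.Mutex // stands in for Calico's host-wide IPAM lock
        next := 133       // next free host byte in 192.168.88.128/26 at this point in the log
        var wg sync.WaitGroup
        for _, pod := range []string{"coredns-7c65d6cfc9-q8wpj", "coredns-7c65d6cfc9-xp2hz", "calico-apiserver-7847fb757b-kgbj8"} {
            wg.Add(1)
            go func(pod string) {
                defer wg.Done()
                mu.Lock() // "About to acquire host-wide IPAM lock."
                ip := fmt.Sprintf("192.168.88.%d/26", next)
                next++
                mu.Unlock() // "Released host-wide IPAM lock."
                fmt.Println(pod, "->", ip)
            }(pod)
        }
        wg.Wait()
        // Goroutine scheduling is nondeterministic; the lock only guarantees unique claims.
    }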
Sep 5 23:56:19.568481 containerd[1538]: 2025-09-05 23:56:19.540 [INFO][4879] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" HandleID="k8s-pod-network.2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" Workload="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:19.569025 containerd[1538]: 2025-09-05 23:56:19.545 [INFO][4850] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-kgbj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0", GenerateName:"calico-apiserver-7847fb757b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cbedf47-c538-40ff-85f2-c633bdff94ba", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7847fb757b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7847fb757b-kgbj8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc691184605", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:19.569025 containerd[1538]: 2025-09-05 23:56:19.545 [INFO][4850] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-kgbj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:19.569025 containerd[1538]: 2025-09-05 23:56:19.545 [INFO][4850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc691184605 ContainerID="2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-kgbj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:19.569025 containerd[1538]: 2025-09-05 23:56:19.551 [INFO][4850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-kgbj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:19.569025 containerd[1538]: 2025-09-05 23:56:19.551 [INFO][4850] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-kgbj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0", GenerateName:"calico-apiserver-7847fb757b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cbedf47-c538-40ff-85f2-c633bdff94ba", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7847fb757b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0", Pod:"calico-apiserver-7847fb757b-kgbj8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc691184605", MAC:"6a:5e:1e:52:c6:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:19.569025 containerd[1538]: 2025-09-05 23:56:19.563 [INFO][4850] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0" Namespace="calico-apiserver" Pod="calico-apiserver-7847fb757b-kgbj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:19.598320 containerd[1538]: time="2025-09-05T23:56:19.598043814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:19.598320 containerd[1538]: time="2025-09-05T23:56:19.598137094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:19.598320 containerd[1538]: time="2025-09-05T23:56:19.598148934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:19.598320 containerd[1538]: time="2025-09-05T23:56:19.598282054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:19.630929 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 23:56:19.634033 containerd[1538]: time="2025-09-05T23:56:19.633992762Z" level=info msg="StartContainer for \"e27e96116f46186b4bbbdde4de993f363740b61c94d5061cad503b4d8d28184e\" returns successfully" Sep 5 23:56:19.640954 containerd[1538]: time="2025-09-05T23:56:19.640193207Z" level=info msg="StartContainer for \"8585b9b98f163e0b64ad6020a8812d518b65cf04ae819b451cd87036270cfad8\" returns successfully" Sep 5 23:56:19.667705 containerd[1538]: time="2025-09-05T23:56:19.667669148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7847fb757b-kgbj8,Uid:7cbedf47-c538-40ff-85f2-c633bdff94ba,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0\"" Sep 5 23:56:19.675149 systemd-networkd[1231]: cali005e1591508: Gained IPv6LL Sep 5 23:56:20.090171 containerd[1538]: time="2025-09-05T23:56:20.086045464Z" level=info msg="StopPodSandbox for \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\"" Sep 5 23:56:20.198166 containerd[1538]: 2025-09-05 23:56:20.156 [INFO][5098] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:20.198166 containerd[1538]: 2025-09-05 23:56:20.157 [INFO][5098] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" iface="eth0" netns="/var/run/netns/cni-3a4137ca-0f9f-1e54-02ac-cdc0b725d998" Sep 5 23:56:20.198166 containerd[1538]: 2025-09-05 23:56:20.157 [INFO][5098] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" iface="eth0" netns="/var/run/netns/cni-3a4137ca-0f9f-1e54-02ac-cdc0b725d998" Sep 5 23:56:20.198166 containerd[1538]: 2025-09-05 23:56:20.157 [INFO][5098] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" iface="eth0" netns="/var/run/netns/cni-3a4137ca-0f9f-1e54-02ac-cdc0b725d998" Sep 5 23:56:20.198166 containerd[1538]: 2025-09-05 23:56:20.157 [INFO][5098] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:20.198166 containerd[1538]: 2025-09-05 23:56:20.157 [INFO][5098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:20.198166 containerd[1538]: 2025-09-05 23:56:20.180 [INFO][5106] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" HandleID="k8s-pod-network.1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Workload="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:20.198166 containerd[1538]: 2025-09-05 23:56:20.180 [INFO][5106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:20.198166 containerd[1538]: 2025-09-05 23:56:20.180 [INFO][5106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:20.198166 containerd[1538]: 2025-09-05 23:56:20.190 [WARNING][5106] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" HandleID="k8s-pod-network.1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Workload="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:20.198166 containerd[1538]: 2025-09-05 23:56:20.190 [INFO][5106] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" HandleID="k8s-pod-network.1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Workload="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:20.198166 containerd[1538]: 2025-09-05 23:56:20.193 [INFO][5106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:20.198166 containerd[1538]: 2025-09-05 23:56:20.195 [INFO][5098] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:20.198166 containerd[1538]: time="2025-09-05T23:56:20.197991104Z" level=info msg="TearDown network for sandbox \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\" successfully" Sep 5 23:56:20.198166 containerd[1538]: time="2025-09-05T23:56:20.198020144Z" level=info msg="StopPodSandbox for \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\" returns successfully" Sep 5 23:56:20.199540 containerd[1538]: time="2025-09-05T23:56:20.198819145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2czxt,Uid:3040e6c7-ae59-4353-9f35-08b7bd7a921f,Namespace:calico-system,Attempt:1,}" Sep 5 23:56:20.210769 systemd[1]: run-netns-cni\x2d3a4137ca\x2d0f9f\x2d1e54\x2d02ac\x2dcdc0b725d998.mount: Deactivated successfully. Sep 5 23:56:20.323156 kubelet[2607]: E0905 23:56:20.323115 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:56:20.333979 kubelet[2607]: E0905 23:56:20.333927 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:56:20.339086 kubelet[2607]: I0905 23:56:20.338361 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xp2hz" podStartSLOduration=34.338346125 podStartE2EDuration="34.338346125s" podCreationTimestamp="2025-09-05 23:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:56:20.336998964 +0000 UTC m=+39.344238233" watchObservedRunningTime="2025-09-05 23:56:20.338346125 +0000 UTC m=+39.345585394" Sep 5 23:56:20.378203 systemd-networkd[1231]: calie77b0138297: Gained IPv6LL Sep 5 23:56:20.507199 systemd-networkd[1231]: calic6ae5378fe8: Link UP Sep 5 23:56:20.508881 systemd-networkd[1231]: calic6ae5378fe8: Gained carrier Sep 5 23:56:20.524631 kubelet[2607]: I0905 23:56:20.524148 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55df5c4477-4fndx" podStartSLOduration=20.451257199 podStartE2EDuration="22.524128898s" podCreationTimestamp="2025-09-05 23:55:58 +0000 UTC" firstStartedPulling="2025-09-05 23:56:17.421690956 +0000 UTC m=+36.428930265" lastFinishedPulling="2025-09-05 23:56:19.494562655 +0000 UTC m=+38.501801964" observedRunningTime="2025-09-05 23:56:20.37410303 +0000 UTC m=+39.381342339" 
watchObservedRunningTime="2025-09-05 23:56:20.524128898 +0000 UTC m=+39.531368207" Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.388 [INFO][5114] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2czxt-eth0 csi-node-driver- calico-system 3040e6c7-ae59-4353-9f35-08b7bd7a921f 985 0 2025-09-05 23:55:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2czxt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic6ae5378fe8 [] [] }} ContainerID="217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" Namespace="calico-system" Pod="csi-node-driver-2czxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--2czxt-" Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.388 [INFO][5114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" Namespace="calico-system" Pod="csi-node-driver-2czxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.412 [INFO][5130] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" HandleID="k8s-pod-network.217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" Workload="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.413 [INFO][5130] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" HandleID="k8s-pod-network.217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" Workload="localhost-k8s-csi--node--driver--2czxt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d770), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2czxt", "timestamp":"2025-09-05 23:56:20.412845898 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.413 [INFO][5130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.413 [INFO][5130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
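The pod_startup_latency_tracker entry for calico-kube-controllers above is internally consistent: for this entry, podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image pull window (lastFinishedPulling minus firstStartedPulling, about 2.07s). Re-deriving both figures with stdlib time parsing, timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-09-05 23:55:58 +0000 UTC")
        running := parse("2025-09-05 23:56:20.524128898 +0000 UTC")
        pullStart := parse("2025-09-05 23:56:17.421690956 +0000 UTC")
        pullEnd := parse("2025-09-05 23:56:19.494562655 +0000 UTC")

        e2e := running.Sub(created)
        slo := e2e - pullEnd.Sub(pullStart) // the SLO figure excludes image pull time
        fmt.Println("podStartE2EDuration:", e2e) // 22.524128898s
        fmt.Println("podStartSLOduration:", slo) // 20.451257199s
    }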
Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.413 [INFO][5130] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.423 [INFO][5130] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" host="localhost" Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.430 [INFO][5130] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.438 [INFO][5130] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.440 [INFO][5130] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.451 [INFO][5130] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.451 [INFO][5130] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" host="localhost" Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.462 [INFO][5130] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.474 [INFO][5130] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" host="localhost" Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.487 [INFO][5130] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" host="localhost" Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.487 [INFO][5130] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" host="localhost" Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.487 [INFO][5130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
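The ipam/ipam.go trace just above is a compact picture of Calico's host-affinity IPAM: under the host-wide lock it looks up blocks affine to "localhost", loads 192.168.88.128/26, confirms the affinity, claims the next free address under a newly created handle, and writes the block back, yielding 192.168.88.136/26. A simplified sketch of that flow (invented types, not the Calico implementation):

package main

import (
	"fmt"
	"net"
	"sync"
)

// block models one /26 allocation block with a host affinity, as in the
// "Trying affinity for 192.168.88.128/26" lines above.
type block struct {
	cidr     *net.IPNet
	affinity string            // host this block is affine to
	used     map[string]string // IP -> handle that claimed it
}

var hostWideLock sync.Mutex // the "host-wide IPAM lock" in the trace

// autoAssign mirrors the logged steps: acquire the lock, check the block's
// affinity, claim the first unused address under the caller's handle, and
// "write the block back" (here: mutate the map) before releasing the lock.
func autoAssign(b *block, host, handle string) (net.IP, error) {
	hostWideLock.Lock()
	defer hostWideLock.Unlock()

	if b.affinity != host {
		return nil, fmt.Errorf("block %v not affine to %s", b.cidr, host)
	}
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if _, taken := b.used[ip.String()]; !taken {
			b.used[ip.String()] = handle
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %v exhausted", b.cidr)
}

// next returns ip+1, carrying across octets.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: cidr, affinity: "localhost", used: map[string]string{}}
	// Pretend .128-.135 were claimed by the pods networked earlier in the log.
	for i := 128; i <= 135; i++ {
		b.used[fmt.Sprintf("192.168.88.%d", i)] = "earlier-handle"
	}
	ip, err := autoAssign(b, "localhost", "k8s-pod-network.217b3771") // handle truncated
	fmt.Println(ip, err) // 192.168.88.136 <nil>, matching the trace
}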
Sep 5 23:56:20.529260 containerd[1538]: 2025-09-05 23:56:20.487 [INFO][5130] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" HandleID="k8s-pod-network.217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" Workload="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:20.535647 containerd[1538]: 2025-09-05 23:56:20.500 [INFO][5114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" Namespace="calico-system" Pod="csi-node-driver-2czxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--2czxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2czxt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3040e6c7-ae59-4353-9f35-08b7bd7a921f", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2czxt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6ae5378fe8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:20.535647 containerd[1538]: 2025-09-05 23:56:20.501 [INFO][5114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" Namespace="calico-system" Pod="csi-node-driver-2czxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:20.535647 containerd[1538]: 2025-09-05 23:56:20.501 [INFO][5114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6ae5378fe8 ContainerID="217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" Namespace="calico-system" Pod="csi-node-driver-2czxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:20.535647 containerd[1538]: 2025-09-05 23:56:20.509 [INFO][5114] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" Namespace="calico-system" Pod="csi-node-driver-2czxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:20.535647 containerd[1538]: 2025-09-05 23:56:20.511 [INFO][5114] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" Namespace="calico-system" Pod="csi-node-driver-2czxt" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--2czxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2czxt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3040e6c7-ae59-4353-9f35-08b7bd7a921f", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f", Pod:"csi-node-driver-2czxt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6ae5378fe8", MAC:"5a:66:58:c9:23:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:20.535647 containerd[1538]: 2025-09-05 23:56:20.524 [INFO][5114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f" Namespace="calico-system" Pod="csi-node-driver-2czxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:20.551414 containerd[1538]: time="2025-09-05T23:56:20.551148518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:20.551414 containerd[1538]: time="2025-09-05T23:56:20.551206158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:20.551414 containerd[1538]: time="2025-09-05T23:56:20.551217478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:20.551414 containerd[1538]: time="2025-09-05T23:56:20.551301398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:20.576356 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 23:56:20.598673 containerd[1538]: time="2025-09-05T23:56:20.598532672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2czxt,Uid:3040e6c7-ae59-4353-9f35-08b7bd7a921f,Namespace:calico-system,Attempt:1,} returns sandbox id \"217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f\"" Sep 5 23:56:20.763487 systemd-networkd[1231]: calib99e8adca02: Gained IPv6LL Sep 5 23:56:21.319509 containerd[1538]: time="2025-09-05T23:56:21.318993374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:21.319821 containerd[1538]: time="2025-09-05T23:56:21.319798415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 5 23:56:21.320809 containerd[1538]: time="2025-09-05T23:56:21.320785056Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:21.323120 containerd[1538]: time="2025-09-05T23:56:21.323089017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:21.323985 containerd[1538]: time="2025-09-05T23:56:21.323758338Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.822126638s" Sep 5 23:56:21.323985 containerd[1538]: time="2025-09-05T23:56:21.323786418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:56:21.326011 containerd[1538]: time="2025-09-05T23:56:21.324782218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 23:56:21.326011 containerd[1538]: time="2025-09-05T23:56:21.325764779Z" level=info msg="CreateContainer within sandbox \"c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:56:21.337818 containerd[1538]: time="2025-09-05T23:56:21.337755187Z" level=info msg="CreateContainer within sandbox \"c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6e4f0368616930234327317b1119c58fe639188f806470dbcda8cd68538774d6\"" Sep 5 23:56:21.338152 containerd[1538]: time="2025-09-05T23:56:21.338121387Z" level=info msg="StartContainer for \"6e4f0368616930234327317b1119c58fe639188f806470dbcda8cd68538774d6\"" Sep 5 23:56:21.340330 kubelet[2607]: E0905 23:56:21.340027 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:56:21.340330 kubelet[2607]: E0905 23:56:21.340043 2607 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:56:21.340330 kubelet[2607]: I0905 23:56:21.340067 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:56:21.403811 containerd[1538]: time="2025-09-05T23:56:21.403689071Z" level=info msg="StartContainer for \"6e4f0368616930234327317b1119c58fe639188f806470dbcda8cd68538774d6\" returns successfully" Sep 5 23:56:21.530207 systemd-networkd[1231]: calibc691184605: Gained IPv6LL Sep 5 23:56:21.594231 systemd-networkd[1231]: calic6ae5378fe8: Gained IPv6LL Sep 5 23:56:22.344467 kubelet[2607]: E0905 23:56:22.344432 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:56:22.360509 kubelet[2607]: I0905 23:56:22.360384 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7847fb757b-mxf96" podStartSLOduration=24.57421746 podStartE2EDuration="28.3603659s" podCreationTimestamp="2025-09-05 23:55:54 +0000 UTC" firstStartedPulling="2025-09-05 23:56:17.538391858 +0000 UTC m=+36.545631167" lastFinishedPulling="2025-09-05 23:56:21.324540298 +0000 UTC m=+40.331779607" observedRunningTime="2025-09-05 23:56:22.359429419 +0000 UTC m=+41.366668728" watchObservedRunningTime="2025-09-05 23:56:22.3603659 +0000 UTC m=+41.367605249" Sep 5 23:56:22.793580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2720569923.mount: Deactivated successfully. Sep 5 23:56:23.334215 containerd[1538]: time="2025-09-05T23:56:23.334153741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:23.337714 containerd[1538]: time="2025-09-05T23:56:23.337639743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 5 23:56:23.339469 containerd[1538]: time="2025-09-05T23:56:23.339189784Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:23.344797 containerd[1538]: time="2025-09-05T23:56:23.344760947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:23.345460 kubelet[2607]: I0905 23:56:23.345391 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:56:23.346256 containerd[1538]: time="2025-09-05T23:56:23.345849268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.02103449s" Sep 5 23:56:23.346256 containerd[1538]: time="2025-09-05T23:56:23.345883188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 5 23:56:23.347536 containerd[1538]: time="2025-09-05T23:56:23.347297029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" 
Sep 5 23:56:23.348619 containerd[1538]: time="2025-09-05T23:56:23.348518749Z" level=info msg="CreateContainer within sandbox \"7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 23:56:23.363928 containerd[1538]: time="2025-09-05T23:56:23.363849838Z" level=info msg="CreateContainer within sandbox \"7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6382759670621cb3f24caec19f472d6d7daa311e562b22170ab39bfbd4e3c75f\"" Sep 5 23:56:23.364476 containerd[1538]: time="2025-09-05T23:56:23.364377599Z" level=info msg="StartContainer for \"6382759670621cb3f24caec19f472d6d7daa311e562b22170ab39bfbd4e3c75f\"" Sep 5 23:56:23.495475 containerd[1538]: time="2025-09-05T23:56:23.494038435Z" level=info msg="StartContainer for \"6382759670621cb3f24caec19f472d6d7daa311e562b22170ab39bfbd4e3c75f\" returns successfully" Sep 5 23:56:23.655225 containerd[1538]: time="2025-09-05T23:56:23.655134251Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:23.656655 containerd[1538]: time="2025-09-05T23:56:23.656617692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 23:56:23.658535 containerd[1538]: time="2025-09-05T23:56:23.658486413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 311.150024ms" Sep 5 23:56:23.658535 containerd[1538]: time="2025-09-05T23:56:23.658522453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:56:23.662862 containerd[1538]: time="2025-09-05T23:56:23.662699175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 23:56:23.663680 containerd[1538]: time="2025-09-05T23:56:23.663653176Z" level=info msg="CreateContainer within sandbox \"2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:56:23.675138 containerd[1538]: time="2025-09-05T23:56:23.675007622Z" level=info msg="CreateContainer within sandbox \"2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ccc0178ef01ea251968272fcc8e9bb74d6d0700183d1e20e52e7513bc60f2844\"" Sep 5 23:56:23.675502 containerd[1538]: time="2025-09-05T23:56:23.675461663Z" level=info msg="StartContainer for \"ccc0178ef01ea251968272fcc8e9bb74d6d0700183d1e20e52e7513bc60f2844\"" Sep 5 23:56:23.734271 containerd[1538]: time="2025-09-05T23:56:23.734117737Z" level=info msg="StartContainer for \"ccc0178ef01ea251968272fcc8e9bb74d6d0700183d1e20e52e7513bc60f2844\" returns successfully" Sep 5 23:56:24.390761 kubelet[2607]: I0905 23:56:24.390119 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7847fb757b-kgbj8" podStartSLOduration=26.398621966 podStartE2EDuration="30.390098271s" podCreationTimestamp="2025-09-05 23:55:54 +0000 UTC" 
firstStartedPulling="2025-09-05 23:56:19.67086235 +0000 UTC m=+38.678101659" lastFinishedPulling="2025-09-05 23:56:23.662338655 +0000 UTC m=+42.669577964" observedRunningTime="2025-09-05 23:56:24.376310503 +0000 UTC m=+43.383549812" watchObservedRunningTime="2025-09-05 23:56:24.390098271 +0000 UTC m=+43.397337580" Sep 5 23:56:24.708910 containerd[1538]: time="2025-09-05T23:56:24.708869008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:24.710426 containerd[1538]: time="2025-09-05T23:56:24.710396569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 5 23:56:24.711516 containerd[1538]: time="2025-09-05T23:56:24.711491929Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:24.714470 containerd[1538]: time="2025-09-05T23:56:24.714435051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:24.715467 containerd[1538]: time="2025-09-05T23:56:24.715145611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.052273956s" Sep 5 23:56:24.715467 containerd[1538]: time="2025-09-05T23:56:24.715201691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 5 23:56:24.723033 containerd[1538]: time="2025-09-05T23:56:24.722997575Z" level=info msg="CreateContainer within sandbox \"217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 23:56:24.754342 containerd[1538]: time="2025-09-05T23:56:24.754300153Z" level=info msg="CreateContainer within sandbox \"217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7cb6b32493a1fcf8078fa28dd00768bb8be4ef8aa81a1e5e7d89a3401de8cc9b\"" Sep 5 23:56:24.757668 containerd[1538]: time="2025-09-05T23:56:24.757290555Z" level=info msg="StartContainer for \"7cb6b32493a1fcf8078fa28dd00768bb8be4ef8aa81a1e5e7d89a3401de8cc9b\"" Sep 5 23:56:24.814511 containerd[1538]: time="2025-09-05T23:56:24.814466346Z" level=info msg="StartContainer for \"7cb6b32493a1fcf8078fa28dd00768bb8be4ef8aa81a1e5e7d89a3401de8cc9b\" returns successfully" Sep 5 23:56:24.815707 containerd[1538]: time="2025-09-05T23:56:24.815684347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 23:56:25.369586 kubelet[2607]: I0905 23:56:25.369549 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:56:25.541070 kubelet[2607]: I0905 23:56:25.540997 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-dnd4d" podStartSLOduration=22.683901282 podStartE2EDuration="27.54097805s" podCreationTimestamp="2025-09-05 23:55:58 +0000 UTC" firstStartedPulling="2025-09-05 
23:56:18.4897759 +0000 UTC m=+37.497015209" lastFinishedPulling="2025-09-05 23:56:23.346852668 +0000 UTC m=+42.354091977" observedRunningTime="2025-09-05 23:56:24.392048832 +0000 UTC m=+43.399288141" watchObservedRunningTime="2025-09-05 23:56:25.54097805 +0000 UTC m=+44.548217359" Sep 5 23:56:25.976678 containerd[1538]: time="2025-09-05T23:56:25.976628917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:25.977854 containerd[1538]: time="2025-09-05T23:56:25.977665237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 23:56:25.979032 containerd[1538]: time="2025-09-05T23:56:25.978724918Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:25.980989 containerd[1538]: time="2025-09-05T23:56:25.980879719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:25.981821 containerd[1538]: time="2025-09-05T23:56:25.981695279Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.165974372s" Sep 5 23:56:25.981821 containerd[1538]: time="2025-09-05T23:56:25.981732679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 23:56:25.983861 containerd[1538]: time="2025-09-05T23:56:25.983746200Z" level=info msg="CreateContainer within sandbox \"217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 23:56:25.998086 containerd[1538]: time="2025-09-05T23:56:25.998042448Z" level=info msg="CreateContainer within sandbox \"217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"03cb8c2b83c0ff430854edb69454fc3cd9d559ca6814fc4e962f405005c8d6e8\"" Sep 5 23:56:25.999302 containerd[1538]: time="2025-09-05T23:56:25.999273248Z" level=info msg="StartContainer for \"03cb8c2b83c0ff430854edb69454fc3cd9d559ca6814fc4e962f405005c8d6e8\"" Sep 5 23:56:26.075314 containerd[1538]: time="2025-09-05T23:56:26.075270326Z" level=info msg="StartContainer for \"03cb8c2b83c0ff430854edb69454fc3cd9d559ca6814fc4e962f405005c8d6e8\" returns successfully" Sep 5 23:56:26.176042 kubelet[2607]: I0905 23:56:26.175943 2607 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 23:56:26.182269 kubelet[2607]: I0905 23:56:26.182243 2607 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 23:56:26.262387 systemd[1]: Started 
sshd@7-10.0.0.59:22-10.0.0.1:60298.service - OpenSSH per-connection server daemon (10.0.0.1:60298). Sep 5 23:56:26.341057 sshd[5429]: Accepted publickey for core from 10.0.0.1 port 60298 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:56:26.345227 sshd[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:26.352037 systemd-logind[1510]: New session 8 of user core. Sep 5 23:56:26.358278 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 23:56:26.788205 sshd[5429]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:26.791424 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:60298.service: Deactivated successfully. Sep 5 23:56:26.794244 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit. Sep 5 23:56:26.794719 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 23:56:26.799007 systemd-logind[1510]: Removed session 8. Sep 5 23:56:26.969034 kubelet[2607]: I0905 23:56:26.968635 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:56:26.998624 systemd[1]: run-containerd-runc-k8s.io-e27e96116f46186b4bbbdde4de993f363740b61c94d5061cad503b4d8d28184e-runc.azpcEz.mount: Deactivated successfully. Sep 5 23:56:27.036593 kubelet[2607]: I0905 23:56:27.036374 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2czxt" podStartSLOduration=23.654658146 podStartE2EDuration="29.036349313s" podCreationTimestamp="2025-09-05 23:55:58 +0000 UTC" firstStartedPulling="2025-09-05 23:56:20.600815473 +0000 UTC m=+39.608054782" lastFinishedPulling="2025-09-05 23:56:25.98250664 +0000 UTC m=+44.989745949" observedRunningTime="2025-09-05 23:56:26.387406918 +0000 UTC m=+45.394646227" watchObservedRunningTime="2025-09-05 23:56:27.036349313 +0000 UTC m=+46.043588622" Sep 5 23:56:27.535067 kubelet[2607]: I0905 23:56:27.534591 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:56:31.803263 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:50752.service - OpenSSH per-connection server daemon (10.0.0.1:50752). Sep 5 23:56:31.859030 sshd[5560]: Accepted publickey for core from 10.0.0.1 port 50752 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:56:31.861949 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:31.865840 systemd-logind[1510]: New session 9 of user core. Sep 5 23:56:31.871324 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 23:56:32.131059 sshd[5560]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:32.133754 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:50752.service: Deactivated successfully. Sep 5 23:56:32.137747 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 23:56:32.138116 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit. Sep 5 23:56:32.143238 systemd-logind[1510]: Removed session 9. Sep 5 23:56:37.141212 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:50758.service - OpenSSH per-connection server daemon (10.0.0.1:50758). Sep 5 23:56:37.176220 sshd[5577]: Accepted publickey for core from 10.0.0.1 port 50758 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:56:37.177608 sshd[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:37.181858 systemd-logind[1510]: New session 10 of user core. Sep 5 23:56:37.191484 systemd[1]: Started session-10.scope - Session 10 of User core. 
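Each sshd@N-10.0.0.59:22-... .service block in this stretch is one complete per-connection lifecycle: Accepted publickey, pam_unix session opened, a session-N.scope started by systemd-logind, then the mirror-image teardown. A small log-scraping sketch (my own parser, not a systemd tool) that pairs the opened/closed lines by sshd PID to measure how long each session lasted:

package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
	"time"
)

// Matches the pam_unix session lines in this log, capturing day, time,
// sshd PID, and whether the session opened or closed.
var sessionLine = regexp.MustCompile(
	`^Sep +(\d+) (\d{2}:\d{2}:\d{2}\.\d+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	// Two lines copied from the session-8 lifecycle above.
	log := `Sep 5 23:56:26.345227 sshd[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:56:26.788205 sshd[5429]: pam_unix(sshd:session): session closed for user core`

	opened := map[string]time.Time{} // sshd PID -> open time
	sc := bufio.NewScanner(strings.NewReader(log))
	for sc.Scan() {
		m := sessionLine.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		// Syslog timestamps carry no year; pin one for the example.
		t, err := time.Parse("2006 Jan 2 15:04:05.999999", "2025 Sep "+m[1]+" "+m[2])
		if err != nil {
			continue
		}
		pid := m[3]
		if m[4] == "opened" {
			opened[pid] = t
		} else if start, ok := opened[pid]; ok {
			fmt.Printf("sshd[%s]: session lasted %v\n", pid, t.Sub(start))
			delete(opened, pid)
		}
	}
	// Prints: sshd[5429]: session lasted 442.978ms
}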
Sep 5 23:56:37.348316 sshd[5577]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:37.351240 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:50758.service: Deactivated successfully. Sep 5 23:56:37.357476 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 23:56:37.358936 systemd-logind[1510]: Session 10 logged out. Waiting for processes to exit. Sep 5 23:56:37.368852 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:50766.service - OpenSSH per-connection server daemon (10.0.0.1:50766). Sep 5 23:56:37.370093 systemd-logind[1510]: Removed session 10. Sep 5 23:56:37.406267 sshd[5593]: Accepted publickey for core from 10.0.0.1 port 50766 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:56:37.408127 sshd[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:37.412116 systemd-logind[1510]: New session 11 of user core. Sep 5 23:56:37.424241 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 23:56:37.658929 sshd[5593]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:37.670525 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:50778.service - OpenSSH per-connection server daemon (10.0.0.1:50778). Sep 5 23:56:37.671092 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:50766.service: Deactivated successfully. Sep 5 23:56:37.677543 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit. Sep 5 23:56:37.681298 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 23:56:37.684662 systemd-logind[1510]: Removed session 11. Sep 5 23:56:37.717162 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 50778 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:56:37.718674 sshd[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:37.723597 systemd-logind[1510]: New session 12 of user core. Sep 5 23:56:37.731256 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 23:56:37.862883 sshd[5603]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:37.867052 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:50778.service: Deactivated successfully. Sep 5 23:56:37.869554 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit. Sep 5 23:56:37.869569 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 23:56:37.870581 systemd-logind[1510]: Removed session 12. Sep 5 23:56:41.067660 containerd[1538]: time="2025-09-05T23:56:41.067571977Z" level=info msg="StopPodSandbox for \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\"" Sep 5 23:56:41.153403 containerd[1538]: 2025-09-05 23:56:41.114 [WARNING][5637] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0", GenerateName:"calico-apiserver-7847fb757b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cbedf47-c538-40ff-85f2-c633bdff94ba", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7847fb757b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0", Pod:"calico-apiserver-7847fb757b-kgbj8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc691184605", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:41.153403 containerd[1538]: 2025-09-05 23:56:41.114 [INFO][5637] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:41.153403 containerd[1538]: 2025-09-05 23:56:41.114 [INFO][5637] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" iface="eth0" netns="" Sep 5 23:56:41.153403 containerd[1538]: 2025-09-05 23:56:41.114 [INFO][5637] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:41.153403 containerd[1538]: 2025-09-05 23:56:41.114 [INFO][5637] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:41.153403 containerd[1538]: 2025-09-05 23:56:41.135 [INFO][5648] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" HandleID="k8s-pod-network.16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Workload="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:41.153403 containerd[1538]: 2025-09-05 23:56:41.136 [INFO][5648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:41.153403 containerd[1538]: 2025-09-05 23:56:41.136 [INFO][5648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:41.153403 containerd[1538]: 2025-09-05 23:56:41.147 [WARNING][5648] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" HandleID="k8s-pod-network.16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Workload="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:41.153403 containerd[1538]: 2025-09-05 23:56:41.147 [INFO][5648] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" HandleID="k8s-pod-network.16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Workload="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:41.153403 containerd[1538]: 2025-09-05 23:56:41.149 [INFO][5648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:41.153403 containerd[1538]: 2025-09-05 23:56:41.151 [INFO][5637] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:41.153403 containerd[1538]: time="2025-09-05T23:56:41.153299632Z" level=info msg="TearDown network for sandbox \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\" successfully" Sep 5 23:56:41.153403 containerd[1538]: time="2025-09-05T23:56:41.153322992Z" level=info msg="StopPodSandbox for \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\" returns successfully" Sep 5 23:56:41.154150 containerd[1538]: time="2025-09-05T23:56:41.153734232Z" level=info msg="RemovePodSandbox for \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\"" Sep 5 23:56:41.160418 containerd[1538]: time="2025-09-05T23:56:41.160375274Z" level=info msg="Forcibly stopping sandbox \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\"" Sep 5 23:56:41.227918 containerd[1538]: 2025-09-05 23:56:41.194 [WARNING][5666] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0", GenerateName:"calico-apiserver-7847fb757b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cbedf47-c538-40ff-85f2-c633bdff94ba", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7847fb757b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a6c5e0572b2a58ea882610595adf21579df5e676b3c2cd1c4fa8b979c229de0", Pod:"calico-apiserver-7847fb757b-kgbj8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc691184605", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:41.227918 containerd[1538]: 2025-09-05 23:56:41.194 [INFO][5666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:41.227918 containerd[1538]: 2025-09-05 23:56:41.194 [INFO][5666] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" iface="eth0" netns="" Sep 5 23:56:41.227918 containerd[1538]: 2025-09-05 23:56:41.194 [INFO][5666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:41.227918 containerd[1538]: 2025-09-05 23:56:41.194 [INFO][5666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:41.227918 containerd[1538]: 2025-09-05 23:56:41.213 [INFO][5675] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" HandleID="k8s-pod-network.16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Workload="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:41.227918 containerd[1538]: 2025-09-05 23:56:41.213 [INFO][5675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:41.227918 containerd[1538]: 2025-09-05 23:56:41.213 [INFO][5675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:41.227918 containerd[1538]: 2025-09-05 23:56:41.221 [WARNING][5675] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" HandleID="k8s-pod-network.16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Workload="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:41.227918 containerd[1538]: 2025-09-05 23:56:41.221 [INFO][5675] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" HandleID="k8s-pod-network.16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Workload="localhost-k8s-calico--apiserver--7847fb757b--kgbj8-eth0" Sep 5 23:56:41.227918 containerd[1538]: 2025-09-05 23:56:41.224 [INFO][5675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:41.227918 containerd[1538]: 2025-09-05 23:56:41.225 [INFO][5666] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25" Sep 5 23:56:41.228620 containerd[1538]: time="2025-09-05T23:56:41.227894846Z" level=info msg="TearDown network for sandbox \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\" successfully" Sep 5 23:56:41.235790 containerd[1538]: time="2025-09-05T23:56:41.235739408Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:56:41.235884 containerd[1538]: time="2025-09-05T23:56:41.235832608Z" level=info msg="RemovePodSandbox \"16709e5cf0746bf9957b9e1b24d4be00a1420b3c6f9d24fe475405309779bf25\" returns successfully" Sep 5 23:56:41.236372 containerd[1538]: time="2025-09-05T23:56:41.236350408Z" level=info msg="StopPodSandbox for \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\"" Sep 5 23:56:41.309457 containerd[1538]: 2025-09-05 23:56:41.269 [WARNING][5693] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2czxt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3040e6c7-ae59-4353-9f35-08b7bd7a921f", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f", Pod:"csi-node-driver-2czxt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6ae5378fe8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:41.309457 containerd[1538]: 2025-09-05 23:56:41.270 [INFO][5693] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:41.309457 containerd[1538]: 2025-09-05 23:56:41.270 [INFO][5693] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" iface="eth0" netns="" Sep 5 23:56:41.309457 containerd[1538]: 2025-09-05 23:56:41.270 [INFO][5693] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:41.309457 containerd[1538]: 2025-09-05 23:56:41.270 [INFO][5693] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:41.309457 containerd[1538]: 2025-09-05 23:56:41.294 [INFO][5702] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" HandleID="k8s-pod-network.1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Workload="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:41.309457 containerd[1538]: 2025-09-05 23:56:41.294 [INFO][5702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:41.309457 containerd[1538]: 2025-09-05 23:56:41.294 [INFO][5702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:41.309457 containerd[1538]: 2025-09-05 23:56:41.303 [WARNING][5702] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" HandleID="k8s-pod-network.1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Workload="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:41.309457 containerd[1538]: 2025-09-05 23:56:41.303 [INFO][5702] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" HandleID="k8s-pod-network.1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Workload="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:41.309457 containerd[1538]: 2025-09-05 23:56:41.305 [INFO][5702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:41.309457 containerd[1538]: 2025-09-05 23:56:41.307 [INFO][5693] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:41.310118 containerd[1538]: time="2025-09-05T23:56:41.309493701Z" level=info msg="TearDown network for sandbox \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\" successfully" Sep 5 23:56:41.310118 containerd[1538]: time="2025-09-05T23:56:41.309521861Z" level=info msg="StopPodSandbox for \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\" returns successfully" Sep 5 23:56:41.310118 containerd[1538]: time="2025-09-05T23:56:41.310048981Z" level=info msg="RemovePodSandbox for \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\"" Sep 5 23:56:41.310118 containerd[1538]: time="2025-09-05T23:56:41.310076141Z" level=info msg="Forcibly stopping sandbox \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\"" Sep 5 23:56:41.393908 containerd[1538]: 2025-09-05 23:56:41.348 [WARNING][5720] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2czxt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3040e6c7-ae59-4353-9f35-08b7bd7a921f", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"217b377184e7ed261ddbe3a9b64614d2b2d5f4c9ce6e3cae513220a4e23f144f", Pod:"csi-node-driver-2czxt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6ae5378fe8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:41.393908 containerd[1538]: 2025-09-05 23:56:41.348 [INFO][5720] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:41.393908 containerd[1538]: 2025-09-05 23:56:41.348 [INFO][5720] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" iface="eth0" netns="" Sep 5 23:56:41.393908 containerd[1538]: 2025-09-05 23:56:41.348 [INFO][5720] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:41.393908 containerd[1538]: 2025-09-05 23:56:41.348 [INFO][5720] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:41.393908 containerd[1538]: 2025-09-05 23:56:41.378 [INFO][5728] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" HandleID="k8s-pod-network.1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Workload="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:41.393908 containerd[1538]: 2025-09-05 23:56:41.379 [INFO][5728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:41.393908 containerd[1538]: 2025-09-05 23:56:41.379 [INFO][5728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:41.393908 containerd[1538]: 2025-09-05 23:56:41.388 [WARNING][5728] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" HandleID="k8s-pod-network.1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Workload="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:41.393908 containerd[1538]: 2025-09-05 23:56:41.388 [INFO][5728] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" HandleID="k8s-pod-network.1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Workload="localhost-k8s-csi--node--driver--2czxt-eth0" Sep 5 23:56:41.393908 containerd[1538]: 2025-09-05 23:56:41.389 [INFO][5728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:41.393908 containerd[1538]: 2025-09-05 23:56:41.391 [INFO][5720] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c" Sep 5 23:56:41.395503 containerd[1538]: time="2025-09-05T23:56:41.394355237Z" level=info msg="TearDown network for sandbox \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\" successfully" Sep 5 23:56:41.402817 containerd[1538]: time="2025-09-05T23:56:41.402774679Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:56:41.403039 containerd[1538]: time="2025-09-05T23:56:41.403019839Z" level=info msg="RemovePodSandbox \"1c0fb7b0a55d588b36da70b7b0aed5300295c33f0e57bd3d0057917d0eefc61c\" returns successfully" Sep 5 23:56:41.403537 containerd[1538]: time="2025-09-05T23:56:41.403511199Z" level=info msg="StopPodSandbox for \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\"" Sep 5 23:56:41.478621 containerd[1538]: 2025-09-05 23:56:41.445 [WARNING][5747] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" WorkloadEndpoint="localhost-k8s-whisker--cbcc9bc56--42xlw-eth0" Sep 5 23:56:41.478621 containerd[1538]: 2025-09-05 23:56:41.445 [INFO][5747] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:41.478621 containerd[1538]: 2025-09-05 23:56:41.445 [INFO][5747] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" iface="eth0" netns="" Sep 5 23:56:41.478621 containerd[1538]: 2025-09-05 23:56:41.445 [INFO][5747] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:41.478621 containerd[1538]: 2025-09-05 23:56:41.445 [INFO][5747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:41.478621 containerd[1538]: 2025-09-05 23:56:41.463 [INFO][5757] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" HandleID="k8s-pod-network.d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Workload="localhost-k8s-whisker--cbcc9bc56--42xlw-eth0" Sep 5 23:56:41.478621 containerd[1538]: 2025-09-05 23:56:41.463 [INFO][5757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:41.478621 containerd[1538]: 2025-09-05 23:56:41.463 [INFO][5757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:41.478621 containerd[1538]: 2025-09-05 23:56:41.473 [WARNING][5757] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" HandleID="k8s-pod-network.d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Workload="localhost-k8s-whisker--cbcc9bc56--42xlw-eth0" Sep 5 23:56:41.478621 containerd[1538]: 2025-09-05 23:56:41.473 [INFO][5757] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" HandleID="k8s-pod-network.d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Workload="localhost-k8s-whisker--cbcc9bc56--42xlw-eth0" Sep 5 23:56:41.478621 containerd[1538]: 2025-09-05 23:56:41.474 [INFO][5757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:41.478621 containerd[1538]: 2025-09-05 23:56:41.476 [INFO][5747] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:41.479058 containerd[1538]: time="2025-09-05T23:56:41.478665213Z" level=info msg="TearDown network for sandbox \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\" successfully" Sep 5 23:56:41.479058 containerd[1538]: time="2025-09-05T23:56:41.478692893Z" level=info msg="StopPodSandbox for \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\" returns successfully" Sep 5 23:56:41.479164 containerd[1538]: time="2025-09-05T23:56:41.479135093Z" level=info msg="RemovePodSandbox for \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\"" Sep 5 23:56:41.479202 containerd[1538]: time="2025-09-05T23:56:41.479172533Z" level=info msg="Forcibly stopping sandbox \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\"" Sep 5 23:56:41.548390 containerd[1538]: 2025-09-05 23:56:41.513 [WARNING][5775] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" WorkloadEndpoint="localhost-k8s-whisker--cbcc9bc56--42xlw-eth0" Sep 5 23:56:41.548390 containerd[1538]: 2025-09-05 23:56:41.513 [INFO][5775] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:41.548390 containerd[1538]: 2025-09-05 23:56:41.513 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" iface="eth0" netns="" Sep 5 23:56:41.548390 containerd[1538]: 2025-09-05 23:56:41.513 [INFO][5775] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:41.548390 containerd[1538]: 2025-09-05 23:56:41.513 [INFO][5775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:41.548390 containerd[1538]: 2025-09-05 23:56:41.533 [INFO][5783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" HandleID="k8s-pod-network.d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Workload="localhost-k8s-whisker--cbcc9bc56--42xlw-eth0" Sep 5 23:56:41.548390 containerd[1538]: 2025-09-05 23:56:41.533 [INFO][5783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:41.548390 containerd[1538]: 2025-09-05 23:56:41.533 [INFO][5783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:41.548390 containerd[1538]: 2025-09-05 23:56:41.542 [WARNING][5783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" HandleID="k8s-pod-network.d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Workload="localhost-k8s-whisker--cbcc9bc56--42xlw-eth0" Sep 5 23:56:41.548390 containerd[1538]: 2025-09-05 23:56:41.542 [INFO][5783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" HandleID="k8s-pod-network.d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Workload="localhost-k8s-whisker--cbcc9bc56--42xlw-eth0" Sep 5 23:56:41.548390 containerd[1538]: 2025-09-05 23:56:41.544 [INFO][5783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:41.548390 containerd[1538]: 2025-09-05 23:56:41.546 [INFO][5775] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4" Sep 5 23:56:41.548733 containerd[1538]: time="2025-09-05T23:56:41.548435065Z" level=info msg="TearDown network for sandbox \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\" successfully" Sep 5 23:56:41.551363 containerd[1538]: time="2025-09-05T23:56:41.551308146Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:56:41.551486 containerd[1538]: time="2025-09-05T23:56:41.551370106Z" level=info msg="RemovePodSandbox \"d6755d2ec7b8f4ff3eba79defc9216584af72b5e089c6d40577d5f82cd4561c4\" returns successfully" Sep 5 23:56:41.551807 containerd[1538]: time="2025-09-05T23:56:41.551783586Z" level=info msg="StopPodSandbox for \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\"" Sep 5 23:56:41.628297 containerd[1538]: 2025-09-05 23:56:41.587 [WARNING][5801] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0", GenerateName:"calico-kube-controllers-55df5c4477-", Namespace:"calico-system", SelfLink:"", UID:"604f5067-a6d0-4576-889f-1368c927e973", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55df5c4477", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65", Pod:"calico-kube-controllers-55df5c4477-4fndx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d83e3534e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:41.628297 containerd[1538]: 2025-09-05 23:56:41.587 [INFO][5801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:41.628297 containerd[1538]: 2025-09-05 23:56:41.587 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" iface="eth0" netns="" Sep 5 23:56:41.628297 containerd[1538]: 2025-09-05 23:56:41.587 [INFO][5801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:41.628297 containerd[1538]: 2025-09-05 23:56:41.587 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:41.628297 containerd[1538]: 2025-09-05 23:56:41.610 [INFO][5810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" HandleID="k8s-pod-network.65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Workload="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:41.628297 containerd[1538]: 2025-09-05 23:56:41.610 [INFO][5810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:41.628297 containerd[1538]: 2025-09-05 23:56:41.610 [INFO][5810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:41.628297 containerd[1538]: 2025-09-05 23:56:41.621 [WARNING][5810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" HandleID="k8s-pod-network.65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Workload="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:41.628297 containerd[1538]: 2025-09-05 23:56:41.622 [INFO][5810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" HandleID="k8s-pod-network.65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Workload="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:41.628297 containerd[1538]: 2025-09-05 23:56:41.623 [INFO][5810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:41.628297 containerd[1538]: 2025-09-05 23:56:41.626 [INFO][5801] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:41.628786 containerd[1538]: time="2025-09-05T23:56:41.628340320Z" level=info msg="TearDown network for sandbox \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\" successfully" Sep 5 23:56:41.628786 containerd[1538]: time="2025-09-05T23:56:41.628370480Z" level=info msg="StopPodSandbox for \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\" returns successfully" Sep 5 23:56:41.628842 containerd[1538]: time="2025-09-05T23:56:41.628777480Z" level=info msg="RemovePodSandbox for \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\"" Sep 5 23:56:41.628842 containerd[1538]: time="2025-09-05T23:56:41.628804720Z" level=info msg="Forcibly stopping sandbox \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\"" Sep 5 23:56:41.699722 containerd[1538]: 2025-09-05 23:56:41.664 [WARNING][5828] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0", GenerateName:"calico-kube-controllers-55df5c4477-", Namespace:"calico-system", SelfLink:"", UID:"604f5067-a6d0-4576-889f-1368c927e973", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55df5c4477", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9dc08b9efe6224ba85a8911726da1808986333633fa1173b59f5737d19227f65", Pod:"calico-kube-controllers-55df5c4477-4fndx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d83e3534e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:41.699722 containerd[1538]: 2025-09-05 23:56:41.665 [INFO][5828] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:41.699722 containerd[1538]: 2025-09-05 23:56:41.665 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" iface="eth0" netns="" Sep 5 23:56:41.699722 containerd[1538]: 2025-09-05 23:56:41.665 [INFO][5828] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:41.699722 containerd[1538]: 2025-09-05 23:56:41.665 [INFO][5828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:41.699722 containerd[1538]: 2025-09-05 23:56:41.685 [INFO][5837] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" HandleID="k8s-pod-network.65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Workload="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:41.699722 containerd[1538]: 2025-09-05 23:56:41.685 [INFO][5837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:41.699722 containerd[1538]: 2025-09-05 23:56:41.685 [INFO][5837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:41.699722 containerd[1538]: 2025-09-05 23:56:41.694 [WARNING][5837] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" HandleID="k8s-pod-network.65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Workload="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:41.699722 containerd[1538]: 2025-09-05 23:56:41.694 [INFO][5837] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" HandleID="k8s-pod-network.65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Workload="localhost-k8s-calico--kube--controllers--55df5c4477--4fndx-eth0" Sep 5 23:56:41.699722 containerd[1538]: 2025-09-05 23:56:41.695 [INFO][5837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:41.699722 containerd[1538]: 2025-09-05 23:56:41.697 [INFO][5828] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772" Sep 5 23:56:41.700158 containerd[1538]: time="2025-09-05T23:56:41.699763174Z" level=info msg="TearDown network for sandbox \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\" successfully" Sep 5 23:56:41.702651 containerd[1538]: time="2025-09-05T23:56:41.702608214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:56:41.702701 containerd[1538]: time="2025-09-05T23:56:41.702681694Z" level=info msg="RemovePodSandbox \"65348f4b613fa885fa0c28abfdd45fedd18ffcd8e7a2f7342f9d908509d81772\" returns successfully" Sep 5 23:56:41.703399 containerd[1538]: time="2025-09-05T23:56:41.703365974Z" level=info msg="StopPodSandbox for \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\"" Sep 5 23:56:41.801358 containerd[1538]: 2025-09-05 23:56:41.743 [WARNING][5855] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--dnd4d-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d96f52bf-df90-4603-8a34-8bdf02a0c13f", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0", Pod:"goldmane-7988f88666-dnd4d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie77b0138297", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:41.801358 containerd[1538]: 2025-09-05 23:56:41.743 [INFO][5855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:41.801358 containerd[1538]: 2025-09-05 23:56:41.743 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" iface="eth0" netns="" Sep 5 23:56:41.801358 containerd[1538]: 2025-09-05 23:56:41.743 [INFO][5855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:41.801358 containerd[1538]: 2025-09-05 23:56:41.743 [INFO][5855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:41.801358 containerd[1538]: 2025-09-05 23:56:41.787 [INFO][5864] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" HandleID="k8s-pod-network.d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Workload="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:41.801358 containerd[1538]: 2025-09-05 23:56:41.787 [INFO][5864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:41.801358 containerd[1538]: 2025-09-05 23:56:41.787 [INFO][5864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:41.801358 containerd[1538]: 2025-09-05 23:56:41.796 [WARNING][5864] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" HandleID="k8s-pod-network.d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Workload="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:41.801358 containerd[1538]: 2025-09-05 23:56:41.796 [INFO][5864] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" HandleID="k8s-pod-network.d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Workload="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:41.801358 containerd[1538]: 2025-09-05 23:56:41.797 [INFO][5864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:41.801358 containerd[1538]: 2025-09-05 23:56:41.799 [INFO][5855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:41.801773 containerd[1538]: time="2025-09-05T23:56:41.801395432Z" level=info msg="TearDown network for sandbox \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\" successfully" Sep 5 23:56:41.801773 containerd[1538]: time="2025-09-05T23:56:41.801420552Z" level=info msg="StopPodSandbox for \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\" returns successfully" Sep 5 23:56:41.801950 containerd[1538]: time="2025-09-05T23:56:41.801920712Z" level=info msg="RemovePodSandbox for \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\"" Sep 5 23:56:41.801995 containerd[1538]: time="2025-09-05T23:56:41.801977072Z" level=info msg="Forcibly stopping sandbox \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\"" Sep 5 23:56:41.872603 containerd[1538]: 2025-09-05 23:56:41.836 [WARNING][5881] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--dnd4d-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d96f52bf-df90-4603-8a34-8bdf02a0c13f", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b37dd0ce489d608dfaa90aff20b211fef1511ca037b603acc1387433763d4e0", Pod:"goldmane-7988f88666-dnd4d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie77b0138297", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:41.872603 containerd[1538]: 2025-09-05 23:56:41.836 [INFO][5881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:41.872603 containerd[1538]: 2025-09-05 23:56:41.836 [INFO][5881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" iface="eth0" netns="" Sep 5 23:56:41.872603 containerd[1538]: 2025-09-05 23:56:41.836 [INFO][5881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:41.872603 containerd[1538]: 2025-09-05 23:56:41.836 [INFO][5881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:41.872603 containerd[1538]: 2025-09-05 23:56:41.857 [INFO][5890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" HandleID="k8s-pod-network.d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Workload="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:41.872603 containerd[1538]: 2025-09-05 23:56:41.858 [INFO][5890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:41.872603 containerd[1538]: 2025-09-05 23:56:41.858 [INFO][5890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:41.872603 containerd[1538]: 2025-09-05 23:56:41.867 [WARNING][5890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" HandleID="k8s-pod-network.d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Workload="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:41.872603 containerd[1538]: 2025-09-05 23:56:41.867 [INFO][5890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" HandleID="k8s-pod-network.d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Workload="localhost-k8s-goldmane--7988f88666--dnd4d-eth0" Sep 5 23:56:41.872603 containerd[1538]: 2025-09-05 23:56:41.868 [INFO][5890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:41.872603 containerd[1538]: 2025-09-05 23:56:41.870 [INFO][5881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e" Sep 5 23:56:41.873108 containerd[1538]: time="2025-09-05T23:56:41.872642685Z" level=info msg="TearDown network for sandbox \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\" successfully" Sep 5 23:56:41.888541 containerd[1538]: time="2025-09-05T23:56:41.888499768Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:56:41.888682 containerd[1538]: time="2025-09-05T23:56:41.888581528Z" level=info msg="RemovePodSandbox \"d107ce51214abd8b53ed5398099dbab9c12e0598b48ff9ce71c56de05066e24e\" returns successfully" Sep 5 23:56:41.889136 containerd[1538]: time="2025-09-05T23:56:41.889107529Z" level=info msg="StopPodSandbox for \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\"" Sep 5 23:56:41.959799 containerd[1538]: 2025-09-05 23:56:41.924 [WARNING][5908] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0", GenerateName:"calico-apiserver-7847fb757b-", Namespace:"calico-apiserver", SelfLink:"", UID:"882e113a-3ec9-4622-a5cc-bcbb76b3dde2", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7847fb757b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87", Pod:"calico-apiserver-7847fb757b-mxf96", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72f18405429", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:41.959799 containerd[1538]: 2025-09-05 23:56:41.924 [INFO][5908] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:41.959799 containerd[1538]: 2025-09-05 23:56:41.924 [INFO][5908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" iface="eth0" netns="" Sep 5 23:56:41.959799 containerd[1538]: 2025-09-05 23:56:41.924 [INFO][5908] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:41.959799 containerd[1538]: 2025-09-05 23:56:41.924 [INFO][5908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:41.959799 containerd[1538]: 2025-09-05 23:56:41.944 [INFO][5916] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" HandleID="k8s-pod-network.a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Workload="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:41.959799 containerd[1538]: 2025-09-05 23:56:41.944 [INFO][5916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:41.959799 containerd[1538]: 2025-09-05 23:56:41.944 [INFO][5916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:41.959799 containerd[1538]: 2025-09-05 23:56:41.953 [WARNING][5916] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" HandleID="k8s-pod-network.a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Workload="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:41.959799 containerd[1538]: 2025-09-05 23:56:41.954 [INFO][5916] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" HandleID="k8s-pod-network.a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Workload="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:41.959799 containerd[1538]: 2025-09-05 23:56:41.955 [INFO][5916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:41.959799 containerd[1538]: 2025-09-05 23:56:41.957 [INFO][5908] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:41.959799 containerd[1538]: time="2025-09-05T23:56:41.959770102Z" level=info msg="TearDown network for sandbox \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\" successfully" Sep 5 23:56:41.959799 containerd[1538]: time="2025-09-05T23:56:41.959799702Z" level=info msg="StopPodSandbox for \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\" returns successfully" Sep 5 23:56:41.961206 containerd[1538]: time="2025-09-05T23:56:41.961173342Z" level=info msg="RemovePodSandbox for \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\"" Sep 5 23:56:41.961281 containerd[1538]: time="2025-09-05T23:56:41.961213902Z" level=info msg="Forcibly stopping sandbox \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\"" Sep 5 23:56:42.035310 containerd[1538]: 2025-09-05 23:56:42.001 [WARNING][5934] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0", GenerateName:"calico-apiserver-7847fb757b-", Namespace:"calico-apiserver", SelfLink:"", UID:"882e113a-3ec9-4622-a5cc-bcbb76b3dde2", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7847fb757b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9f42feb823d7e7f8d620ab2fbfef0f0a1221e9f9b6d327bf5a4d448d30d9c87", Pod:"calico-apiserver-7847fb757b-mxf96", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72f18405429", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:42.035310 containerd[1538]: 2025-09-05 23:56:42.001 [INFO][5934] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:42.035310 containerd[1538]: 2025-09-05 23:56:42.001 [INFO][5934] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" iface="eth0" netns="" Sep 5 23:56:42.035310 containerd[1538]: 2025-09-05 23:56:42.001 [INFO][5934] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:42.035310 containerd[1538]: 2025-09-05 23:56:42.001 [INFO][5934] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:42.035310 containerd[1538]: 2025-09-05 23:56:42.021 [INFO][5942] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" HandleID="k8s-pod-network.a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Workload="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:42.035310 containerd[1538]: 2025-09-05 23:56:42.021 [INFO][5942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:42.035310 containerd[1538]: 2025-09-05 23:56:42.021 [INFO][5942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:42.035310 containerd[1538]: 2025-09-05 23:56:42.030 [WARNING][5942] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" HandleID="k8s-pod-network.a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Workload="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:42.035310 containerd[1538]: 2025-09-05 23:56:42.030 [INFO][5942] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" HandleID="k8s-pod-network.a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Workload="localhost-k8s-calico--apiserver--7847fb757b--mxf96-eth0" Sep 5 23:56:42.035310 containerd[1538]: 2025-09-05 23:56:42.031 [INFO][5942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:42.035310 containerd[1538]: 2025-09-05 23:56:42.033 [INFO][5934] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45" Sep 5 23:56:42.035702 containerd[1538]: time="2025-09-05T23:56:42.035348715Z" level=info msg="TearDown network for sandbox \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\" successfully" Sep 5 23:56:42.038183 containerd[1538]: time="2025-09-05T23:56:42.038146836Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:56:42.038240 containerd[1538]: time="2025-09-05T23:56:42.038211636Z" level=info msg="RemovePodSandbox \"a68be3399feff31fe0f2815fac917527fe84bd6dd8d4d2d6c7279884359f6b45\" returns successfully" Sep 5 23:56:42.038673 containerd[1538]: time="2025-09-05T23:56:42.038648076Z" level=info msg="StopPodSandbox for \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\"" Sep 5 23:56:42.114973 containerd[1538]: 2025-09-05 23:56:42.075 [WARNING][5960] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8a5e4a42-edd8-4334-8fc9-cc5dbb79316a", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e", Pod:"coredns-7c65d6cfc9-q8wpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali005e1591508", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:42.114973 containerd[1538]: 2025-09-05 23:56:42.075 [INFO][5960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:42.114973 containerd[1538]: 2025-09-05 23:56:42.075 [INFO][5960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" iface="eth0" netns="" Sep 5 23:56:42.114973 containerd[1538]: 2025-09-05 23:56:42.075 [INFO][5960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:42.114973 containerd[1538]: 2025-09-05 23:56:42.076 [INFO][5960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:42.114973 containerd[1538]: 2025-09-05 23:56:42.100 [INFO][5969] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" HandleID="k8s-pod-network.1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Workload="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:42.114973 containerd[1538]: 2025-09-05 23:56:42.100 [INFO][5969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:42.114973 containerd[1538]: 2025-09-05 23:56:42.100 [INFO][5969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:56:42.114973 containerd[1538]: 2025-09-05 23:56:42.109 [WARNING][5969] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" HandleID="k8s-pod-network.1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Workload="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:42.114973 containerd[1538]: 2025-09-05 23:56:42.109 [INFO][5969] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" HandleID="k8s-pod-network.1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Workload="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:42.114973 containerd[1538]: 2025-09-05 23:56:42.111 [INFO][5969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:42.114973 containerd[1538]: 2025-09-05 23:56:42.113 [INFO][5960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:42.116155 containerd[1538]: time="2025-09-05T23:56:42.115016249Z" level=info msg="TearDown network for sandbox \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\" successfully" Sep 5 23:56:42.116155 containerd[1538]: time="2025-09-05T23:56:42.115041329Z" level=info msg="StopPodSandbox for \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\" returns successfully" Sep 5 23:56:42.116245 containerd[1538]: time="2025-09-05T23:56:42.116213769Z" level=info msg="RemovePodSandbox for \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\"" Sep 5 23:56:42.116269 containerd[1538]: time="2025-09-05T23:56:42.116257529Z" level=info msg="Forcibly stopping sandbox \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\"" Sep 5 23:56:42.181523 containerd[1538]: 2025-09-05 23:56:42.149 [WARNING][5987] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8a5e4a42-edd8-4334-8fc9-cc5dbb79316a", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db0d2e9a9e4ed0c537a58a43d8958d6a86d5351a899653884fd157d8bf012c5e", Pod:"coredns-7c65d6cfc9-q8wpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali005e1591508", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:42.181523 containerd[1538]: 2025-09-05 23:56:42.149 [INFO][5987] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:42.181523 containerd[1538]: 2025-09-05 23:56:42.149 [INFO][5987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" iface="eth0" netns="" Sep 5 23:56:42.181523 containerd[1538]: 2025-09-05 23:56:42.149 [INFO][5987] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:42.181523 containerd[1538]: 2025-09-05 23:56:42.149 [INFO][5987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:42.181523 containerd[1538]: 2025-09-05 23:56:42.168 [INFO][5995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" HandleID="k8s-pod-network.1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Workload="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:42.181523 containerd[1538]: 2025-09-05 23:56:42.168 [INFO][5995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:42.181523 containerd[1538]: 2025-09-05 23:56:42.168 [INFO][5995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:56:42.181523 containerd[1538]: 2025-09-05 23:56:42.176 [WARNING][5995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" HandleID="k8s-pod-network.1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Workload="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:42.181523 containerd[1538]: 2025-09-05 23:56:42.176 [INFO][5995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" HandleID="k8s-pod-network.1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Workload="localhost-k8s-coredns--7c65d6cfc9--q8wpj-eth0" Sep 5 23:56:42.181523 containerd[1538]: 2025-09-05 23:56:42.177 [INFO][5995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:42.181523 containerd[1538]: 2025-09-05 23:56:42.179 [INFO][5987] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef" Sep 5 23:56:42.181928 containerd[1538]: time="2025-09-05T23:56:42.181559381Z" level=info msg="TearDown network for sandbox \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\" successfully" Sep 5 23:56:42.187670 containerd[1538]: time="2025-09-05T23:56:42.187640502Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:56:42.187735 containerd[1538]: time="2025-09-05T23:56:42.187703222Z" level=info msg="RemovePodSandbox \"1d3813e08eeaa0384f035b79cdaafd4fbe2be32448ec54cb8c77171f39b90cef\" returns successfully" Sep 5 23:56:42.188307 containerd[1538]: time="2025-09-05T23:56:42.188286582Z" level=info msg="StopPodSandbox for \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\"" Sep 5 23:56:42.251393 containerd[1538]: 2025-09-05 23:56:42.220 [WARNING][6012] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"80ac692e-415b-4419-9e3c-f3f01df822d0", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f", Pod:"coredns-7c65d6cfc9-xp2hz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib99e8adca02", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:42.251393 containerd[1538]: 2025-09-05 23:56:42.220 [INFO][6012] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:42.251393 containerd[1538]: 2025-09-05 23:56:42.220 [INFO][6012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" iface="eth0" netns="" Sep 5 23:56:42.251393 containerd[1538]: 2025-09-05 23:56:42.220 [INFO][6012] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:42.251393 containerd[1538]: 2025-09-05 23:56:42.220 [INFO][6012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:42.251393 containerd[1538]: 2025-09-05 23:56:42.238 [INFO][6020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" HandleID="k8s-pod-network.ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Workload="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:42.251393 containerd[1538]: 2025-09-05 23:56:42.238 [INFO][6020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:42.251393 containerd[1538]: 2025-09-05 23:56:42.238 [INFO][6020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:56:42.251393 containerd[1538]: 2025-09-05 23:56:42.246 [WARNING][6020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" HandleID="k8s-pod-network.ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Workload="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:42.251393 containerd[1538]: 2025-09-05 23:56:42.246 [INFO][6020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" HandleID="k8s-pod-network.ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Workload="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:42.251393 containerd[1538]: 2025-09-05 23:56:42.247 [INFO][6020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:42.251393 containerd[1538]: 2025-09-05 23:56:42.249 [INFO][6012] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:42.251393 containerd[1538]: time="2025-09-05T23:56:42.251372233Z" level=info msg="TearDown network for sandbox \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\" successfully" Sep 5 23:56:42.251798 containerd[1538]: time="2025-09-05T23:56:42.251402073Z" level=info msg="StopPodSandbox for \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\" returns successfully" Sep 5 23:56:42.251998 containerd[1538]: time="2025-09-05T23:56:42.251866433Z" level=info msg="RemovePodSandbox for \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\"" Sep 5 23:56:42.252042 containerd[1538]: time="2025-09-05T23:56:42.252008193Z" level=info msg="Forcibly stopping sandbox \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\"" Sep 5 23:56:42.316911 containerd[1538]: 2025-09-05 23:56:42.286 [WARNING][6038] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"80ac692e-415b-4419-9e3c-f3f01df822d0", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62f7136205274c6cfa6ebf1c70eacf5c69d63237ffe19b2ca398b4d1a0e6046f", Pod:"coredns-7c65d6cfc9-xp2hz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib99e8adca02", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:42.316911 containerd[1538]: 2025-09-05 23:56:42.286 [INFO][6038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:42.316911 containerd[1538]: 2025-09-05 23:56:42.286 [INFO][6038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" iface="eth0" netns="" Sep 5 23:56:42.316911 containerd[1538]: 2025-09-05 23:56:42.286 [INFO][6038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:42.316911 containerd[1538]: 2025-09-05 23:56:42.286 [INFO][6038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:42.316911 containerd[1538]: 2025-09-05 23:56:42.303 [INFO][6046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" HandleID="k8s-pod-network.ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Workload="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:42.316911 containerd[1538]: 2025-09-05 23:56:42.304 [INFO][6046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:42.316911 containerd[1538]: 2025-09-05 23:56:42.304 [INFO][6046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:56:42.316911 containerd[1538]: 2025-09-05 23:56:42.312 [WARNING][6046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" HandleID="k8s-pod-network.ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Workload="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:42.316911 containerd[1538]: 2025-09-05 23:56:42.312 [INFO][6046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" HandleID="k8s-pod-network.ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Workload="localhost-k8s-coredns--7c65d6cfc9--xp2hz-eth0" Sep 5 23:56:42.316911 containerd[1538]: 2025-09-05 23:56:42.313 [INFO][6046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:42.316911 containerd[1538]: 2025-09-05 23:56:42.315 [INFO][6038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1" Sep 5 23:56:42.317336 containerd[1538]: time="2025-09-05T23:56:42.316971364Z" level=info msg="TearDown network for sandbox \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\" successfully" Sep 5 23:56:42.319842 containerd[1538]: time="2025-09-05T23:56:42.319814325Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:56:42.319896 containerd[1538]: time="2025-09-05T23:56:42.319873725Z" level=info msg="RemovePodSandbox \"ca070596be7d4c4f445069cf588d58a08e06a77e062116ef7aacda7d3e00e6c1\" returns successfully" Sep 5 23:56:42.875209 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:43600.service - OpenSSH per-connection server daemon (10.0.0.1:43600). Sep 5 23:56:42.915342 sshd[6053]: Accepted publickey for core from 10.0.0.1 port 43600 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:56:42.917495 sshd[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:42.920834 systemd-logind[1510]: New session 13 of user core. Sep 5 23:56:42.932276 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 23:56:43.142493 sshd[6053]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:43.149188 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:43610.service - OpenSSH per-connection server daemon (10.0.0.1:43610). Sep 5 23:56:43.149579 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:43600.service: Deactivated successfully. Sep 5 23:56:43.152668 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 23:56:43.152720 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit. Sep 5 23:56:43.153768 systemd-logind[1510]: Removed session 13. Sep 5 23:56:43.182000 sshd[6065]: Accepted publickey for core from 10.0.0.1 port 43610 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:56:43.183231 sshd[6065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:43.186810 systemd-logind[1510]: New session 14 of user core. Sep 5 23:56:43.195221 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 5 23:56:43.393442 sshd[6065]: pam_unix(sshd:session): session closed for user core
Sep 5 23:56:43.402234 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:43626.service - OpenSSH per-connection server daemon (10.0.0.1:43626).
Sep 5 23:56:43.402640 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:43610.service: Deactivated successfully.
Sep 5 23:56:43.405528 systemd[1]: session-14.scope: Deactivated successfully.
Sep 5 23:56:43.405748 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit.
Sep 5 23:56:43.409316 systemd-logind[1510]: Removed session 14.
Sep 5 23:56:43.442920 sshd[6078]: Accepted publickey for core from 10.0.0.1 port 43626 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:56:43.444436 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:56:43.449064 systemd-logind[1510]: New session 15 of user core.
Sep 5 23:56:43.460294 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 5 23:56:45.001749 sshd[6078]: pam_unix(sshd:session): session closed for user core
Sep 5 23:56:45.011298 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:43638.service - OpenSSH per-connection server daemon (10.0.0.1:43638).
Sep 5 23:56:45.011707 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:43626.service: Deactivated successfully.
Sep 5 23:56:45.023110 systemd[1]: session-15.scope: Deactivated successfully.
Sep 5 23:56:45.030711 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit.
Sep 5 23:56:45.033186 systemd-logind[1510]: Removed session 15.
Sep 5 23:56:45.060698 sshd[6097]: Accepted publickey for core from 10.0.0.1 port 43638 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:56:45.062339 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:56:45.066625 systemd-logind[1510]: New session 16 of user core.
Sep 5 23:56:45.077375 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 23:56:45.603182 sshd[6097]: pam_unix(sshd:session): session closed for user core
Sep 5 23:56:45.617297 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:43652.service - OpenSSH per-connection server daemon (10.0.0.1:43652).
Sep 5 23:56:45.617768 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:43638.service: Deactivated successfully.
Sep 5 23:56:45.619437 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 23:56:45.622537 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit.
Sep 5 23:56:45.623518 systemd-logind[1510]: Removed session 16.
Sep 5 23:56:45.657885 sshd[6112]: Accepted publickey for core from 10.0.0.1 port 43652 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:56:45.659656 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:56:45.664213 systemd-logind[1510]: New session 17 of user core.
Sep 5 23:56:45.670249 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 23:56:45.797536 sshd[6112]: pam_unix(sshd:session): session closed for user core
Sep 5 23:56:45.800875 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:43652.service: Deactivated successfully.
Sep 5 23:56:45.803046 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit.
Sep 5 23:56:45.803066 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 23:56:45.803954 systemd-logind[1510]: Removed session 17.
Sep 5 23:56:50.814275 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:45030.service - OpenSSH per-connection server daemon (10.0.0.1:45030).
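[Editor's note: each "sshd@N-10.0.0.59:22-10.0.0.1:PORT.service - OpenSSH per-connection server daemon" unit above is systemd socket activation with Accept=yes: every incoming TCP connection gets its own short-lived unit, named with an instance counter plus the local and remote endpoints, and the unit is deactivated once pam_unix closes the session. A rough Go sketch of the same accept-per-connection naming; the listen address is an assumption and the unit-name format is copied from the log purely for illustration.]

package main

import (
	"fmt"
	"net"
)

func main() {
	// Stand-in for the sshd listener at 10.0.0.59:22 seen in the log.
	ln, err := net.Listen("tcp", "127.0.0.1:2222")
	if err != nil {
		panic(err)
	}
	for n := 0; ; n++ {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		// Mirror the unit-name format from the log:
		// sshd@<counter>-<local addr>-<remote addr>.service
		unit := fmt.Sprintf("sshd@%d-%s-%s.service", n, conn.LocalAddr(), conn.RemoteAddr())
		go func(c net.Conn, name string) {
			defer c.Close() // handler exit ~ "Deactivated successfully."
			fmt.Printf("Started %s - per-connection handler\n", name)
		}(conn, unit)
	}
}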
Sep 5 23:56:50.847803 sshd[6143]: Accepted publickey for core from 10.0.0.1 port 45030 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:56:50.849781 sshd[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:56:50.855276 systemd-logind[1510]: New session 18 of user core.
Sep 5 23:56:50.861235 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 23:56:51.048747 sshd[6143]: pam_unix(sshd:session): session closed for user core
Sep 5 23:56:51.054387 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:45030.service: Deactivated successfully.
Sep 5 23:56:51.058944 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit.
Sep 5 23:56:51.058990 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 23:56:51.066813 systemd-logind[1510]: Removed session 18.
Sep 5 23:56:56.060266 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:45034.service - OpenSSH per-connection server daemon (10.0.0.1:45034).
Sep 5 23:56:56.117990 sshd[6159]: Accepted publickey for core from 10.0.0.1 port 45034 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:56:56.118983 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:56:56.125212 systemd-logind[1510]: New session 19 of user core.
Sep 5 23:56:56.131374 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 23:56:56.432898 sshd[6159]: pam_unix(sshd:session): session closed for user core
Sep 5 23:56:56.436668 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:45034.service: Deactivated successfully.
Sep 5 23:56:56.438886 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit.
Sep 5 23:56:56.438991 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 23:56:56.439916 systemd-logind[1510]: Removed session 19.
Sep 5 23:56:58.085413 kubelet[2607]: E0905 23:56:58.085325 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:57:00.579882 kubelet[2607]: I0905 23:57:00.579832 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 23:57:01.446277 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:35790.service - OpenSSH per-connection server daemon (10.0.0.1:35790).
Sep 5 23:57:01.497906 sshd[6245]: Accepted publickey for core from 10.0.0.1 port 35790 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:57:01.499552 sshd[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:57:01.505369 systemd-logind[1510]: New session 20 of user core.
Sep 5 23:57:01.517316 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 23:57:01.712482 sshd[6245]: pam_unix(sshd:session): session closed for user core
Sep 5 23:57:01.716548 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit.
Sep 5 23:57:01.717612 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:35790.service: Deactivated successfully.
Sep 5 23:57:01.719873 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 23:57:01.721512 systemd-logind[1510]: Removed session 20.
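[Editor's note: the kubelet "Nameserver limits exceeded" error above reflects the glibc resolver's MAXNS cap of three nameserver entries in resolv.conf: when more are configured, kubelet keeps the first three and logs the nameserver line it actually applied ("1.1.1.1 1.0.0.1 8.8.8.8" here). A small Go sketch of that truncation; the parsing is simplified and is not kubelet's real dns.go logic.]

package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS: only the first 3 entries are used

// effectiveNameservers returns the nameservers a glibc resolver would
// honor and whether the configured list had to be truncated.
func effectiveNameservers(resolvConf string) ([]string, bool) {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		return servers[:maxNameservers], true
	}
	return servers, false
}

func main() {
	// Four configured servers: one more than the resolver can use.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, truncated := effectiveNameservers(conf)
	if truncated {
		// Same shape as the kubelet error in the log above.
		fmt.Printf("Nameserver limits exceeded; the applied nameserver line is: %s\n",
			strings.Join(kept, " "))
	}
}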