Sep 12 17:33:57.904071 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025 Sep 12 17:33:57.904092 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:33:57.904103 kernel: BIOS-provided physical RAM map: Sep 12 17:33:57.904110 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 12 17:33:57.904116 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 12 17:33:57.904122 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 12 17:33:57.904129 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 12 17:33:57.904135 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 12 17:33:57.904141 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 12 17:33:57.904147 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 12 17:33:57.904156 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 12 17:33:57.904162 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Sep 12 17:33:57.904168 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Sep 12 17:33:57.904181 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Sep 12 17:33:57.904190 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 12 17:33:57.904196 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 12 17:33:57.904205 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 12 17:33:57.904212 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 12 17:33:57.904219 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 12 17:33:57.904225 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 12 17:33:57.904232 kernel: NX (Execute Disable) protection: active Sep 12 17:33:57.904238 kernel: APIC: Static calls initialized Sep 12 17:33:57.904245 kernel: efi: EFI v2.7 by EDK II Sep 12 17:33:57.904251 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Sep 12 17:33:57.904258 kernel: SMBIOS 2.8 present. 
Sep 12 17:33:57.904265 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Sep 12 17:33:57.904271 kernel: Hypervisor detected: KVM Sep 12 17:33:57.904280 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 12 17:33:57.904289 kernel: kvm-clock: using sched offset of 5577814334 cycles Sep 12 17:33:57.904296 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 17:33:57.904303 kernel: tsc: Detected 2794.748 MHz processor Sep 12 17:33:57.904310 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 17:33:57.904317 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 17:33:57.904324 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 12 17:33:57.904330 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 12 17:33:57.904337 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 17:33:57.904347 kernel: Using GB pages for direct mapping Sep 12 17:33:57.904353 kernel: Secure boot disabled Sep 12 17:33:57.904360 kernel: ACPI: Early table checksum verification disabled Sep 12 17:33:57.904367 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 12 17:33:57.904377 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 12 17:33:57.904384 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:33:57.904391 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:33:57.904400 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 12 17:33:57.904407 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:33:57.904414 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:33:57.904421 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:33:57.904428 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:33:57.904438 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 12 17:33:57.904445 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 12 17:33:57.904454 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 12 17:33:57.904463 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 12 17:33:57.904471 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 12 17:33:57.904477 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 12 17:33:57.904484 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 12 17:33:57.904491 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 12 17:33:57.904498 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 12 17:33:57.904505 kernel: No NUMA configuration found Sep 12 17:33:57.904512 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 12 17:33:57.904519 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 12 17:33:57.904528 kernel: Zone ranges: Sep 12 17:33:57.904535 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 17:33:57.904542 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 12 17:33:57.904549 kernel: Normal empty Sep 12 17:33:57.904556 kernel: Movable zone start for each node Sep 12 17:33:57.904563 kernel: Early memory node ranges Sep 12 17:33:57.904569 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 12 17:33:57.904576 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 12 17:33:57.904583 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 12 17:33:57.904592 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 12 17:33:57.904599 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 12 17:33:57.904606 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 12 17:33:57.904613 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 12 17:33:57.904620 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 17:33:57.904627 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 12 17:33:57.904633 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 12 17:33:57.904640 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 17:33:57.904647 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 12 17:33:57.904656 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 12 17:33:57.904663 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 12 17:33:57.904670 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 12 17:33:57.904677 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 12 17:33:57.904683 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 12 17:33:57.904690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 12 17:33:57.904697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 12 17:33:57.904704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 17:33:57.904711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 12 17:33:57.904718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 12 17:33:57.904727 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 17:33:57.904734 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 12 17:33:57.904741 kernel: TSC deadline timer available Sep 12 17:33:57.904748 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 12 17:33:57.904755 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 12 17:33:57.904761 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 12 17:33:57.904768 kernel: kvm-guest: setup PV sched yield Sep 12 17:33:57.904775 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 12 17:33:57.904792 kernel: Booting paravirtualized kernel on KVM Sep 12 17:33:57.904802 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 17:33:57.904809 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 12 17:33:57.904816 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 12 17:33:57.904823 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 12 17:33:57.904830 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 12 17:33:57.904837 kernel: kvm-guest: PV spinlocks enabled Sep 12 17:33:57.904843 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 12 17:33:57.904852 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 
17:33:57.904862 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:33:57.904869 kernel: random: crng init done Sep 12 17:33:57.904876 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:33:57.904883 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:33:57.904890 kernel: Fallback order for Node 0: 0 Sep 12 17:33:57.904897 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Sep 12 17:33:57.904904 kernel: Policy zone: DMA32 Sep 12 17:33:57.904911 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:33:57.904918 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 171124K reserved, 0K cma-reserved) Sep 12 17:33:57.904927 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 12 17:33:57.904934 kernel: ftrace: allocating 37974 entries in 149 pages Sep 12 17:33:57.904941 kernel: ftrace: allocated 149 pages with 4 groups Sep 12 17:33:57.904948 kernel: Dynamic Preempt: voluntary Sep 12 17:33:57.904962 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:33:57.904976 kernel: rcu: RCU event tracing is enabled. Sep 12 17:33:57.904984 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 12 17:33:57.904991 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:33:57.904999 kernel: Rude variant of Tasks RCU enabled. Sep 12 17:33:57.905006 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:33:57.905013 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:33:57.905020 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 12 17:33:57.905030 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 12 17:33:57.905037 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:33:57.905044 kernel: Console: colour dummy device 80x25 Sep 12 17:33:57.905052 kernel: printk: console [ttyS0] enabled Sep 12 17:33:57.905059 kernel: ACPI: Core revision 20230628 Sep 12 17:33:57.905068 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 12 17:33:57.905076 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 17:33:57.905083 kernel: x2apic enabled Sep 12 17:33:57.905090 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 17:33:57.905097 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 12 17:33:57.905105 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 12 17:33:57.905112 kernel: kvm-guest: setup PV IPIs Sep 12 17:33:57.905119 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 12 17:33:57.905126 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 12 17:33:57.905136 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 12 17:33:57.905143 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 12 17:33:57.905150 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 12 17:33:57.905158 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 12 17:33:57.905165 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 17:33:57.905172 kernel: Spectre V2 : Mitigation: Retpolines Sep 12 17:33:57.905185 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 17:33:57.905193 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 12 17:33:57.905200 kernel: active return thunk: retbleed_return_thunk Sep 12 17:33:57.905209 kernel: RETBleed: Mitigation: untrained return thunk Sep 12 17:33:57.905217 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 12 17:33:57.905224 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 12 17:33:57.905231 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 12 17:33:57.905239 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 12 17:33:57.905246 kernel: active return thunk: srso_return_thunk Sep 12 17:33:57.905254 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 12 17:33:57.905261 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 17:33:57.905270 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 17:33:57.905277 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 17:33:57.905285 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 17:33:57.905292 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 12 17:33:57.905299 kernel: Freeing SMP alternatives memory: 32K Sep 12 17:33:57.905306 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:33:57.905314 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:33:57.905321 kernel: landlock: Up and running. Sep 12 17:33:57.905328 kernel: SELinux: Initializing. Sep 12 17:33:57.905339 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:33:57.905347 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:33:57.905355 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 12 17:33:57.905362 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:33:57.905372 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:33:57.905380 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:33:57.905388 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 12 17:33:57.905396 kernel: ... version: 0 Sep 12 17:33:57.905404 kernel: ... bit width: 48 Sep 12 17:33:57.905413 kernel: ... generic registers: 6 Sep 12 17:33:57.905420 kernel: ... value mask: 0000ffffffffffff Sep 12 17:33:57.905428 kernel: ... max period: 00007fffffffffff Sep 12 17:33:57.905435 kernel: ... fixed-purpose events: 0 Sep 12 17:33:57.905442 kernel: ... 
event mask: 000000000000003f Sep 12 17:33:57.905449 kernel: signal: max sigframe size: 1776 Sep 12 17:33:57.905456 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:33:57.905464 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:33:57.905471 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:33:57.905480 kernel: smpboot: x86: Booting SMP configuration: Sep 12 17:33:57.905487 kernel: .... node #0, CPUs: #1 #2 #3 Sep 12 17:33:57.905495 kernel: smp: Brought up 1 node, 4 CPUs Sep 12 17:33:57.905502 kernel: smpboot: Max logical packages: 1 Sep 12 17:33:57.905509 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 12 17:33:57.905516 kernel: devtmpfs: initialized Sep 12 17:33:57.905523 kernel: x86/mm: Memory block size: 128MB Sep 12 17:33:57.905531 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 12 17:33:57.905538 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 12 17:33:57.905548 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 12 17:33:57.905556 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 12 17:33:57.905563 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 12 17:33:57.905570 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:33:57.905577 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 12 17:33:57.905585 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:33:57.905592 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:33:57.905599 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:33:57.905607 kernel: audit: type=2000 audit(1757698437.333:1): state=initialized audit_enabled=0 res=1 Sep 12 17:33:57.905616 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:33:57.905623 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 17:33:57.905631 kernel: cpuidle: using governor menu Sep 12 17:33:57.905638 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:33:57.905645 kernel: dca service started, version 1.12.1 Sep 12 17:33:57.905653 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 12 17:33:57.905660 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 12 17:33:57.905667 kernel: PCI: Using configuration type 1 for base access Sep 12 17:33:57.905675 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 12 17:33:57.905685 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:33:57.905692 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:33:57.905699 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:33:57.905706 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:33:57.905714 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:33:57.905721 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:33:57.905728 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:33:57.905735 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:33:57.905742 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 12 17:33:57.905752 kernel: ACPI: Interpreter enabled Sep 12 17:33:57.905759 kernel: ACPI: PM: (supports S0 S3 S5) Sep 12 17:33:57.905767 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 17:33:57.905774 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 17:33:57.905792 kernel: PCI: Using E820 reservations for host bridge windows Sep 12 17:33:57.905800 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 12 17:33:57.905807 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 17:33:57.905987 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:33:57.906126 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 12 17:33:57.906260 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 12 17:33:57.906271 kernel: PCI host bridge to bus 0000:00 Sep 12 17:33:57.906399 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 12 17:33:57.906519 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 12 17:33:57.906634 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 12 17:33:57.906746 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 12 17:33:57.906888 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 12 17:33:57.907003 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Sep 12 17:33:57.907115 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 17:33:57.907267 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 12 17:33:57.907400 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 12 17:33:57.907530 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 12 17:33:57.907683 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 12 17:33:57.907843 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 12 17:33:57.907979 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 12 17:33:57.908122 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 12 17:33:57.908292 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 12 17:33:57.908442 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 12 17:33:57.908587 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 12 17:33:57.908737 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 12 17:33:57.908913 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 12 17:33:57.909060 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 12 17:33:57.909218 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Sep 12 17:33:57.909367 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 12 17:33:57.909520 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 12 17:33:57.909668 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 12 17:33:57.909869 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 12 17:33:57.910014 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 12 17:33:57.910153 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 12 17:33:57.910317 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 12 17:33:57.910460 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 12 17:33:57.910614 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 12 17:33:57.910757 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 12 17:33:57.910942 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 12 17:33:57.911096 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 12 17:33:57.911242 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 12 17:33:57.911257 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 12 17:33:57.911268 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 12 17:33:57.911280 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 12 17:33:57.911290 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 12 17:33:57.911308 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 12 17:33:57.911318 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 12 17:33:57.911325 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 12 17:33:57.911332 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 12 17:33:57.911340 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 12 17:33:57.911347 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 12 17:33:57.911354 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 12 17:33:57.911361 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 12 17:33:57.911369 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 12 17:33:57.911379 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 12 17:33:57.911386 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 12 17:33:57.911393 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 12 17:33:57.911400 kernel: iommu: Default domain type: Translated Sep 12 17:33:57.911407 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 17:33:57.911415 kernel: efivars: Registered efivars operations Sep 12 17:33:57.911422 kernel: PCI: Using ACPI for IRQ routing Sep 12 17:33:57.911429 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 17:33:57.911436 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 12 17:33:57.911443 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 12 17:33:57.911453 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 12 17:33:57.911461 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 12 17:33:57.911607 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 12 17:33:57.911750 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 12 17:33:57.911906 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 12 17:33:57.911921 kernel: vgaarb: loaded Sep 12 17:33:57.911929 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Sep 12 17:33:57.911936 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 12 17:33:57.911948 kernel: clocksource: Switched to clocksource kvm-clock Sep 12 17:33:57.911955 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:33:57.911963 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:33:57.911970 kernel: pnp: PnP ACPI init Sep 12 17:33:57.912126 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 12 17:33:57.912138 kernel: pnp: PnP ACPI: found 6 devices Sep 12 17:33:57.912146 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 17:33:57.912153 kernel: NET: Registered PF_INET protocol family Sep 12 17:33:57.912165 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:33:57.912172 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 17:33:57.912187 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:33:57.912196 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:33:57.912203 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 17:33:57.912211 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 17:33:57.912218 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:33:57.912225 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:33:57.912233 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:33:57.912242 kernel: NET: Registered PF_XDP protocol family Sep 12 17:33:57.912388 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 12 17:33:57.912531 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 12 17:33:57.912658 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 12 17:33:57.912838 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 12 17:33:57.912969 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 12 17:33:57.913112 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 12 17:33:57.913288 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 12 17:33:57.913422 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Sep 12 17:33:57.913435 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:33:57.913446 kernel: Initialise system trusted keyrings Sep 12 17:33:57.913455 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 17:33:57.913464 kernel: Key type asymmetric registered Sep 12 17:33:57.913474 kernel: Asymmetric key parser 'x509' registered Sep 12 17:33:57.913484 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 17:33:57.913493 kernel: io scheduler mq-deadline registered Sep 12 17:33:57.913500 kernel: io scheduler kyber registered Sep 12 17:33:57.913511 kernel: io scheduler bfq registered Sep 12 17:33:57.913519 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 17:33:57.913527 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 12 17:33:57.913534 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 12 17:33:57.913542 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 12 17:33:57.913549 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:33:57.913556 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 
115200) is a 16550A Sep 12 17:33:57.913564 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 12 17:33:57.913571 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 17:33:57.913581 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 17:33:57.913735 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 12 17:33:57.913747 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 17:33:57.913899 kernel: rtc_cmos 00:04: registered as rtc0 Sep 12 17:33:57.914023 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T17:33:57 UTC (1757698437) Sep 12 17:33:57.914158 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 12 17:33:57.914170 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 12 17:33:57.914186 kernel: efifb: probing for efifb Sep 12 17:33:57.914199 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Sep 12 17:33:57.914206 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Sep 12 17:33:57.914214 kernel: efifb: scrolling: redraw Sep 12 17:33:57.914222 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Sep 12 17:33:57.914233 kernel: Console: switching to colour frame buffer device 100x37 Sep 12 17:33:57.914264 kernel: fb0: EFI VGA frame buffer device Sep 12 17:33:57.914277 kernel: pstore: Using crash dump compression: deflate Sep 12 17:33:57.914288 kernel: pstore: Registered efi_pstore as persistent store backend Sep 12 17:33:57.914298 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:33:57.914310 kernel: Segment Routing with IPv6 Sep 12 17:33:57.914318 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:33:57.914325 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:33:57.914333 kernel: Key type dns_resolver registered Sep 12 17:33:57.914340 kernel: IPI shorthand broadcast: enabled Sep 12 17:33:57.914348 kernel: sched_clock: Marking stable (632004699, 109543135)->(757327289, -15779455) Sep 12 17:33:57.914355 kernel: registered taskstats version 1 Sep 12 17:33:57.914362 kernel: Loading compiled-in X.509 certificates Sep 12 17:33:57.914370 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9' Sep 12 17:33:57.914380 kernel: Key type .fscrypt registered Sep 12 17:33:57.914387 kernel: Key type fscrypt-provisioning registered Sep 12 17:33:57.914395 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 17:33:57.914403 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:33:57.914410 kernel: ima: No architecture policies found Sep 12 17:33:57.914432 kernel: clk: Disabling unused clocks Sep 12 17:33:57.914458 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 12 17:33:57.914468 kernel: Write protecting the kernel read-only data: 36864k Sep 12 17:33:57.914497 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 12 17:33:57.914506 kernel: Run /init as init process Sep 12 17:33:57.914513 kernel: with arguments: Sep 12 17:33:57.914520 kernel: /init Sep 12 17:33:57.914528 kernel: with environment: Sep 12 17:33:57.914535 kernel: HOME=/ Sep 12 17:33:57.914543 kernel: TERM=linux Sep 12 17:33:57.914550 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:33:57.914560 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:33:57.914573 systemd[1]: Detected virtualization kvm. Sep 12 17:33:57.914582 systemd[1]: Detected architecture x86-64. Sep 12 17:33:57.914590 systemd[1]: Running in initrd. Sep 12 17:33:57.914600 systemd[1]: No hostname configured, using default hostname. Sep 12 17:33:57.914610 systemd[1]: Hostname set to . Sep 12 17:33:57.914619 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:33:57.914630 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:33:57.914641 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:33:57.914651 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:33:57.914661 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:33:57.914672 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:33:57.914684 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:33:57.914696 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:33:57.914706 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:33:57.914717 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:33:57.914725 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:33:57.914733 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:33:57.914741 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:33:57.914749 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:33:57.914759 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:33:57.914767 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:33:57.914775 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:33:57.914797 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:33:57.914805 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:33:57.914813 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Sep 12 17:33:57.914824 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:33:57.914836 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:33:57.914847 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:33:57.914860 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:33:57.914872 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:33:57.914883 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:33:57.914891 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:33:57.914899 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:33:57.914907 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:33:57.914915 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:33:57.914923 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:33:57.914934 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:33:57.914942 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:33:57.914950 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:33:57.914979 systemd-journald[193]: Collecting audit messages is disabled. Sep 12 17:33:57.915001 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:33:57.915010 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:33:57.915021 systemd-journald[193]: Journal started Sep 12 17:33:57.915046 systemd-journald[193]: Runtime Journal (/run/log/journal/6b7b63e583f34f539727165aac8c09f6) is 6.0M, max 48.3M, 42.2M free. Sep 12 17:33:57.917157 systemd-modules-load[194]: Inserted module 'overlay' Sep 12 17:33:57.918691 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:33:57.918713 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:33:57.921273 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:33:57.925153 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:33:57.926277 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:33:57.936869 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:33:57.940253 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:33:57.943343 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:33:57.996094 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:33:58.005810 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:33:58.008481 systemd-modules-load[194]: Inserted module 'br_netfilter' Sep 12 17:33:58.009700 kernel: Bridge firewalling registered Sep 12 17:33:58.010488 dracut-cmdline[221]: dracut-dracut-053 Sep 12 17:33:58.011679 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 12 17:33:58.014306 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:33:58.026936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:33:58.038512 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:33:58.042169 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:33:58.083869 systemd-resolved[262]: Positive Trust Anchors: Sep 12 17:33:58.083885 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:33:58.083921 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:33:58.086867 systemd-resolved[262]: Defaulting to hostname 'linux'. Sep 12 17:33:58.088074 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:33:58.093807 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:33:58.124818 kernel: SCSI subsystem initialized Sep 12 17:33:58.134820 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:33:58.145852 kernel: iscsi: registered transport (tcp) Sep 12 17:33:58.170836 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:33:58.170908 kernel: QLogic iSCSI HBA Driver Sep 12 17:33:58.224620 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:33:58.229053 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:33:58.259753 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:33:58.259823 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:33:58.259854 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:33:58.302805 kernel: raid6: avx2x4 gen() 30152 MB/s Sep 12 17:33:58.319799 kernel: raid6: avx2x2 gen() 30637 MB/s Sep 12 17:33:58.336838 kernel: raid6: avx2x1 gen() 25863 MB/s Sep 12 17:33:58.336856 kernel: raid6: using algorithm avx2x2 gen() 30637 MB/s Sep 12 17:33:58.354841 kernel: raid6: .... xor() 19488 MB/s, rmw enabled Sep 12 17:33:58.354866 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:33:58.374804 kernel: xor: automatically using best checksumming function avx Sep 12 17:33:58.528816 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:33:58.541424 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:33:58.557031 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:33:58.570994 systemd-udevd[413]: Using default interface naming scheme 'v255'. 
Sep 12 17:33:58.582101 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:33:58.586989 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:33:58.602445 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Sep 12 17:33:58.635939 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:33:58.652946 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:33:58.723429 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:33:58.734943 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:33:58.751448 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:33:58.755306 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:33:58.757985 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:33:58.763414 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 17:33:58.772062 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:33:58.760162 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:33:58.818426 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 17:33:58.818655 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:33:58.818680 kernel: GPT:9289727 != 19775487 Sep 12 17:33:58.818694 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:33:58.818715 kernel: GPT:9289727 != 19775487 Sep 12 17:33:58.818727 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:33:58.818741 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:33:58.788693 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:33:58.816391 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:33:58.816606 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:33:58.851176 kernel: libata version 3.00 loaded. Sep 12 17:33:58.817501 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:33:58.818985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:33:58.819198 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:33:58.825888 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:33:58.857135 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:33:58.857183 kernel: AES CTR mode by8 optimization enabled Sep 12 17:33:58.859814 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 17:33:58.860057 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 17:33:58.861808 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 12 17:33:58.862025 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 17:33:58.866999 kernel: scsi host0: ahci Sep 12 17:33:58.867250 kernel: scsi host1: ahci Sep 12 17:33:58.880880 kernel: scsi host2: ahci Sep 12 17:33:58.881118 kernel: scsi host3: ahci Sep 12 17:33:58.881224 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:33:58.882419 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 12 17:33:58.886803 kernel: scsi host4: ahci Sep 12 17:33:58.919614 kernel: scsi host5: ahci Sep 12 17:33:58.919864 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 12 17:33:58.919879 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 12 17:33:58.919892 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 12 17:33:58.919905 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 12 17:33:58.919915 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 12 17:33:58.919927 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 12 17:33:58.943085 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 17:33:58.952512 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470) Sep 12 17:33:58.952533 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (472) Sep 12 17:33:58.948840 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:33:58.949028 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:33:58.958037 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 17:33:58.963442 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 17:33:58.964756 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 17:33:58.971693 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:33:58.978061 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:33:59.036620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:33:59.054531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:33:59.072256 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:33:59.097181 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:33:59.148700 disk-uuid[561]: Primary Header is updated. Sep 12 17:33:59.148700 disk-uuid[561]: Secondary Entries is updated. Sep 12 17:33:59.148700 disk-uuid[561]: Secondary Header is updated. Sep 12 17:33:59.192970 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:33:59.193000 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:33:59.198805 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:33:59.199807 kernel: block device autoloading is deprecated and will be removed. 
Sep 12 17:33:59.229436 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 17:33:59.229481 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 17:33:59.229492 kernel: ata3.00: applying bridge limits Sep 12 17:33:59.230812 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 17:33:59.231806 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 17:33:59.232815 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 17:33:59.234210 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 17:33:59.234231 kernel: ata3.00: configured for UDMA/100 Sep 12 17:33:59.235452 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 17:33:59.239077 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 17:33:59.281045 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 17:33:59.281429 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:33:59.295813 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 17:34:00.199673 disk-uuid[575]: The operation has completed successfully. Sep 12 17:34:00.201272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:34:00.229327 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:34:00.229499 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:34:00.264929 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:34:00.269258 sh[604]: Success Sep 12 17:34:00.282812 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 12 17:34:00.317445 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:34:00.336912 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:34:00.339666 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:34:00.350615 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19 Sep 12 17:34:00.350644 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:34:00.350655 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:34:00.352509 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:34:00.352523 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:34:00.358451 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:34:00.359340 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:34:00.366957 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:34:00.369481 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:34:00.379509 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:34:00.379542 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:34:00.379553 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:34:00.382835 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:34:00.393876 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 17:34:00.396222 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:34:00.409116 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 12 17:34:00.413963 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:34:00.483458 ignition[694]: Ignition 2.19.0 Sep 12 17:34:00.483470 ignition[694]: Stage: fetch-offline Sep 12 17:34:00.483532 ignition[694]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:34:00.483544 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:34:00.483646 ignition[694]: parsed url from cmdline: "" Sep 12 17:34:00.483651 ignition[694]: no config URL provided Sep 12 17:34:00.483658 ignition[694]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:34:00.483668 ignition[694]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:34:00.483707 ignition[694]: op(1): [started] loading QEMU firmware config module Sep 12 17:34:00.483714 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 17:34:00.494488 ignition[694]: op(1): [finished] loading QEMU firmware config module Sep 12 17:34:00.523304 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:34:00.531930 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:34:00.536353 ignition[694]: parsing config with SHA512: 069bcc6035b177370b3d32e2899901e486d3d5038f67b2643f6629ed7beba41182527611279fc0f2cee7f30b7d80b8baaf4019b2fdda6af82dce1c7c9378eac0 Sep 12 17:34:00.545577 unknown[694]: fetched base config from "system" Sep 12 17:34:00.545589 unknown[694]: fetched user config from "qemu" Sep 12 17:34:00.546059 ignition[694]: fetch-offline: fetch-offline passed Sep 12 17:34:00.549960 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:34:00.546136 ignition[694]: Ignition finished successfully Sep 12 17:34:00.559431 systemd-networkd[793]: lo: Link UP Sep 12 17:34:00.559440 systemd-networkd[793]: lo: Gained carrier Sep 12 17:34:00.561077 systemd-networkd[793]: Enumeration completed Sep 12 17:34:00.561177 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:34:00.561579 systemd-networkd[793]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:34:00.561584 systemd-networkd[793]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:34:00.562625 systemd-networkd[793]: eth0: Link UP Sep 12 17:34:00.562630 systemd-networkd[793]: eth0: Gained carrier Sep 12 17:34:00.562638 systemd-networkd[793]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:34:00.563760 systemd[1]: Reached target network.target - Network. Sep 12 17:34:00.565914 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 17:34:00.572186 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 12 17:34:00.579823 systemd-networkd[793]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:34:00.590183 ignition[796]: Ignition 2.19.0 Sep 12 17:34:00.590202 ignition[796]: Stage: kargs Sep 12 17:34:00.590503 ignition[796]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:34:00.590518 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:34:00.593652 ignition[796]: kargs: kargs passed Sep 12 17:34:00.593713 ignition[796]: Ignition finished successfully Sep 12 17:34:00.597340 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:34:00.615946 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:34:00.629973 ignition[806]: Ignition 2.19.0 Sep 12 17:34:00.629990 ignition[806]: Stage: disks Sep 12 17:34:00.630205 ignition[806]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:34:00.630220 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:34:00.634270 ignition[806]: disks: disks passed Sep 12 17:34:00.634333 ignition[806]: Ignition finished successfully Sep 12 17:34:00.637588 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:34:00.640198 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:34:00.640702 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:34:00.641232 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:34:00.641590 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:34:00.642295 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:34:00.656082 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:34:00.667768 systemd-fsck[817]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 17:34:00.675627 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:34:00.690860 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:34:00.846798 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none. Sep 12 17:34:00.847358 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:34:00.849733 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:34:00.861897 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:34:00.864691 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:34:00.867353 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:34:00.867409 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:34:00.876586 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (826) Sep 12 17:34:00.876614 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:34:00.876629 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:34:00.876642 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:34:00.867436 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:34:00.879045 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Sep 12 17:34:00.881185 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:34:00.882477 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:34:00.899929 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:34:00.939615 initrd-setup-root[850]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:34:00.945433 initrd-setup-root[857]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:34:00.951483 initrd-setup-root[864]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:34:00.956349 initrd-setup-root[871]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:34:01.059601 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:34:01.072861 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:34:01.074645 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:34:01.083812 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:34:01.103186 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:34:01.170926 ignition[940]: INFO : Ignition 2.19.0 Sep 12 17:34:01.170926 ignition[940]: INFO : Stage: mount Sep 12 17:34:01.172893 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:34:01.172893 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:34:01.176128 ignition[940]: INFO : mount: mount passed Sep 12 17:34:01.176899 ignition[940]: INFO : Ignition finished successfully Sep 12 17:34:01.180485 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:34:01.195863 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:34:01.349721 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:34:01.369092 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:34:01.395533 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (953) Sep 12 17:34:01.395569 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:34:01.395584 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:34:01.397146 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:34:01.399808 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:34:01.401725 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:34:01.441935 ignition[971]: INFO : Ignition 2.19.0 Sep 12 17:34:01.441935 ignition[971]: INFO : Stage: files Sep 12 17:34:01.443990 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:34:01.443990 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:34:01.443990 ignition[971]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:34:01.443990 ignition[971]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:34:01.443990 ignition[971]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:34:01.451476 ignition[971]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:34:01.451476 ignition[971]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:34:01.451476 ignition[971]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:34:01.451476 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 12 17:34:01.451476 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 12 17:34:01.451476 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:34:01.451476 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 17:34:01.447025 unknown[971]: wrote ssh authorized keys file for user: core Sep 12 17:34:01.536020 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:34:01.692972 systemd-networkd[793]: eth0: Gained IPv6LL Sep 12 17:34:02.219242 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:34:02.221178 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:34:02.222879 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:34:02.224532 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:34:02.226318 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:34:02.227936 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:34:02.229633 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:34:02.231258 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:34:02.233106 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:34:02.235352 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:34:02.237512 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Sep 12 17:34:02.239230 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:34:02.241666 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:34:02.243999 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:34:02.246161 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 17:34:02.690210 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:34:03.334691 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:34:03.334691 ignition[971]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 12 17:34:03.338553 ignition[971]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 17:34:03.338553 ignition[971]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 17:34:03.338553 ignition[971]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 12 17:34:03.338553 ignition[971]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 12 17:34:03.338553 ignition[971]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:34:03.338553 ignition[971]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:34:03.338553 ignition[971]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 12 17:34:03.338553 ignition[971]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Sep 12 17:34:03.338553 ignition[971]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:34:03.338553 ignition[971]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:34:03.338553 ignition[971]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Sep 12 17:34:03.338553 ignition[971]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 17:34:03.369914 ignition[971]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:34:03.377693 ignition[971]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:34:03.379499 ignition[971]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 17:34:03.379499 ignition[971]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Sep 12 
17:34:03.379499 ignition[971]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:34:03.379499 ignition[971]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:34:03.379499 ignition[971]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:34:03.379499 ignition[971]: INFO : files: files passed Sep 12 17:34:03.379499 ignition[971]: INFO : Ignition finished successfully Sep 12 17:34:03.382053 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:34:03.399006 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:34:03.402599 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:34:03.408072 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:34:03.409241 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:34:03.417057 initrd-setup-root-after-ignition[998]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 17:34:03.421958 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:34:03.421958 initrd-setup-root-after-ignition[1000]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:34:03.425670 initrd-setup-root-after-ignition[1004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:34:03.425337 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:34:03.427904 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:34:03.440967 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:34:03.471352 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:34:03.471483 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:34:03.473171 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:34:03.475010 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:34:03.476143 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:34:03.487953 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:34:03.503361 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:34:03.506050 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:34:03.520132 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:34:03.521396 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:34:03.521708 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:34:03.522034 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:34:03.522141 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:34:03.522858 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:34:03.523172 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:34:03.523490 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Sep 12 17:34:03.523831 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:34:03.524157 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:34:03.524472 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:34:03.524812 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:34:03.525162 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:34:03.525478 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:34:03.525808 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:34:03.526097 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:34:03.526201 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:34:03.526763 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:34:03.527097 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:34:03.527385 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:34:03.527513 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:34:03.528181 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:34:03.528285 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:34:03.528746 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:34:03.528877 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:34:03.529394 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:34:03.529669 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:34:03.532830 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:34:03.533240 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:34:03.533617 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:34:03.533834 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:34:03.533928 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:34:03.534242 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:34:03.534329 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:34:03.534818 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:34:03.534925 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:34:03.535402 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:34:03.535501 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:34:03.537037 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:34:03.538171 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:34:03.538455 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:34:03.538560 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:34:03.539196 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:34:03.539321 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:34:03.543552 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:34:03.543660 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 12 17:34:03.571914 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:34:03.611167 ignition[1024]: INFO : Ignition 2.19.0 Sep 12 17:34:03.611167 ignition[1024]: INFO : Stage: umount Sep 12 17:34:03.611167 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:34:03.611167 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:34:03.611167 ignition[1024]: INFO : umount: umount passed Sep 12 17:34:03.611167 ignition[1024]: INFO : Ignition finished successfully Sep 12 17:34:03.613106 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:34:03.613252 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:34:03.615868 systemd[1]: Stopped target network.target - Network. Sep 12 17:34:03.616978 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:34:03.617065 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:34:03.618798 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:34:03.618850 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:34:03.620610 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:34:03.620676 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:34:03.622584 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:34:03.622651 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:34:03.624821 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:34:03.626667 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:34:03.627815 systemd-networkd[793]: eth0: DHCPv6 lease lost Sep 12 17:34:03.630553 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:34:03.630693 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:34:03.633192 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:34:03.633248 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:34:03.648074 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:34:03.649111 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:34:03.649197 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:34:03.651575 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:34:03.654116 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:34:03.654285 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:34:03.660564 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:34:03.660647 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:34:03.662595 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:34:03.662652 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:34:03.664555 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:34:03.664611 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:34:03.670300 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:34:03.670443 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:34:03.672489 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 12 17:34:03.672686 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:34:03.675654 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:34:03.675732 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:34:03.677467 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:34:03.677513 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:34:03.679489 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:34:03.679545 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:34:03.681715 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:34:03.681771 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:34:03.683714 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:34:03.683769 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:34:03.695920 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:34:03.697036 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:34:03.697101 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:34:03.699401 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:34:03.699458 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:34:03.701670 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:34:03.701724 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:34:03.704225 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:34:03.704327 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:34:03.706912 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:34:03.707049 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:34:03.787331 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:34:03.788483 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:34:03.791111 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:34:03.793122 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:34:03.794085 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:34:03.806995 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:34:03.813841 systemd[1]: Switching root. Sep 12 17:34:03.847657 systemd-journald[193]: Journal stopped Sep 12 17:34:05.956236 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Sep 12 17:34:05.956322 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:34:05.956353 kernel: SELinux: policy capability open_perms=1 Sep 12 17:34:05.956368 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:34:05.956384 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:34:05.956406 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:34:05.956423 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:34:05.956438 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:34:05.956450 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:34:05.956467 kernel: audit: type=1403 audit(1757698444.483:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:34:05.956481 systemd[1]: Successfully loaded SELinux policy in 67.504ms. Sep 12 17:34:05.956508 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.342ms. Sep 12 17:34:05.956525 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:34:05.956540 systemd[1]: Detected virtualization kvm. Sep 12 17:34:05.956555 systemd[1]: Detected architecture x86-64. Sep 12 17:34:05.956570 systemd[1]: Detected first boot. Sep 12 17:34:05.956590 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:34:05.956605 zram_generator::config[1089]: No configuration found. Sep 12 17:34:05.956626 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:34:05.956645 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:34:05.956661 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 17:34:05.956677 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:34:05.956692 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:34:05.956706 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:34:05.956718 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:34:05.956730 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:34:05.956743 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:34:05.956757 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:34:05.956769 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:34:05.956797 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:34:05.956810 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:34:05.956823 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:34:05.956835 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:34:05.956848 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:34:05.956860 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:34:05.956872 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Sep 12 17:34:05.956887 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:34:05.956899 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:34:05.956913 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:34:05.956929 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:34:05.956942 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:34:05.956954 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:34:05.956977 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:34:05.956989 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:34:05.957003 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:34:05.957016 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:34:05.957028 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:34:05.957040 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:34:05.957052 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:34:05.957064 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:34:05.957077 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:34:05.957089 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:34:05.957101 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:34:05.957114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:05.957129 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:34:05.957142 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:34:05.957154 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:34:05.957167 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:34:05.957179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:34:05.957192 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:34:05.957205 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:34:05.957222 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:34:05.957243 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:34:05.957257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:34:05.957270 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:34:05.957282 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:34:05.957294 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:34:05.957306 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 12 17:34:05.957319 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Sep 12 17:34:05.957331 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:34:05.957345 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:34:05.957357 kernel: loop: module loaded Sep 12 17:34:05.957368 kernel: fuse: init (API version 7.39) Sep 12 17:34:05.957380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:34:05.957392 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:34:05.957404 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:34:05.957417 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:05.957429 kernel: ACPI: bus type drm_connector registered Sep 12 17:34:05.957441 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:34:05.957459 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:34:05.957474 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:34:05.957489 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:34:05.957504 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:34:05.957519 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:34:05.957533 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:34:05.957548 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:34:05.957561 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:34:05.957573 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:34:05.957587 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:34:05.957599 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:34:05.957610 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:34:05.957622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:34:05.957637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:34:05.957649 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:34:05.957661 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:34:05.957673 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:34:05.957684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:34:05.957696 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:34:05.957729 systemd-journald[1163]: Collecting audit messages is disabled. Sep 12 17:34:05.957750 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:34:05.957766 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:34:05.957778 systemd-journald[1163]: Journal started Sep 12 17:34:05.957818 systemd-journald[1163]: Runtime Journal (/run/log/journal/6b7b63e583f34f539727165aac8c09f6) is 6.0M, max 48.3M, 42.2M free. Sep 12 17:34:05.962879 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:34:05.979657 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Sep 12 17:34:05.990939 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:34:05.997147 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:34:05.998661 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:34:06.006011 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:34:06.012254 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:34:06.013867 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:34:06.016078 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:34:06.018914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:34:06.020314 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:34:06.025621 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:34:06.028984 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:34:06.030627 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:34:06.058807 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:34:06.121394 systemd-journald[1163]: Time spent on flushing to /var/log/journal/6b7b63e583f34f539727165aac8c09f6 is 17.367ms for 988 entries. Sep 12 17:34:06.121394 systemd-journald[1163]: System Journal (/var/log/journal/6b7b63e583f34f539727165aac8c09f6) is 8.0M, max 195.6M, 187.6M free. Sep 12 17:34:06.663459 systemd-journald[1163]: Received client request to flush runtime journal. Sep 12 17:34:06.126471 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:34:06.137938 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:34:06.155641 udevadm[1223]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 17:34:06.258721 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Sep 12 17:34:06.258740 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Sep 12 17:34:06.265837 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:34:06.494444 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:34:06.497643 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:34:06.500249 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:34:06.509030 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:34:06.602697 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:34:06.612124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:34:06.631447 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Sep 12 17:34:06.631462 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Sep 12 17:34:06.637091 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 17:34:06.665953 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:34:07.373516 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:34:07.386930 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:34:07.428414 systemd-udevd[1250]: Using default interface naming scheme 'v255'. Sep 12 17:34:07.449112 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:34:07.484004 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:34:07.504384 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:34:07.508515 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 12 17:34:07.514808 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1268) Sep 12 17:34:07.568970 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:34:07.615047 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 12 17:34:07.621197 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:34:07.641199 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 12 17:34:07.648818 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 17:34:07.655714 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 12 17:34:07.672413 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 17:34:07.668374 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:34:07.707985 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 17:34:07.710361 systemd-networkd[1269]: lo: Link UP Sep 12 17:34:07.710372 systemd-networkd[1269]: lo: Gained carrier Sep 12 17:34:07.712118 systemd-networkd[1269]: Enumeration completed Sep 12 17:34:07.712563 systemd-networkd[1269]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:34:07.712568 systemd-networkd[1269]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:34:07.714684 systemd-networkd[1269]: eth0: Link UP Sep 12 17:34:07.714689 systemd-networkd[1269]: eth0: Gained carrier Sep 12 17:34:07.714701 systemd-networkd[1269]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:34:07.715184 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:34:07.717037 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:34:07.722730 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:34:07.754317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:34:07.755263 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:34:07.764325 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 17:34:07.796116 systemd-networkd[1269]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:34:07.813852 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:34:07.827332 kernel: kvm_amd: TSC scaling supported Sep 12 17:34:07.827462 kernel: kvm_amd: Nested Virtualization enabled Sep 12 17:34:07.827491 kernel: kvm_amd: Nested Paging enabled Sep 12 17:34:07.827867 kernel: kvm_amd: LBR virtualization supported Sep 12 17:34:07.829066 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 12 17:34:07.829093 kernel: kvm_amd: Virtual GIF supported Sep 12 17:34:07.853381 kernel: EDAC MC: Ver: 3.0.0 Sep 12 17:34:07.852938 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:34:07.892555 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:34:07.904094 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:34:07.916251 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:34:07.953462 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:34:07.969341 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:34:07.986124 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:34:07.994271 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:34:08.085950 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:34:08.087833 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:34:08.089275 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:34:08.089304 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:34:08.090497 systemd[1]: Reached target machines.target - Containers. Sep 12 17:34:08.092888 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:34:08.109024 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:34:08.144137 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:34:08.145602 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:34:08.146877 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:34:08.150343 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:34:08.153720 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:34:08.156481 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:34:08.169753 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 12 17:34:08.175815 kernel: loop0: detected capacity change from 0 to 221472 Sep 12 17:34:08.257817 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:34:08.303848 kernel: loop1: detected capacity change from 0 to 140768 Sep 12 17:34:08.448811 kernel: loop2: detected capacity change from 0 to 142488 Sep 12 17:34:08.585810 kernel: loop3: detected capacity change from 0 to 221472 Sep 12 17:34:08.630944 kernel: loop4: detected capacity change from 0 to 140768 Sep 12 17:34:08.643806 kernel: loop5: detected capacity change from 0 to 142488 Sep 12 17:34:08.651548 (sd-merge)[1322]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 17:34:08.652509 (sd-merge)[1322]: Merged extensions into '/usr'. Sep 12 17:34:08.657224 systemd[1]: Reloading requested from client PID 1312 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:34:08.657244 systemd[1]: Reloading... Sep 12 17:34:08.759842 zram_generator::config[1351]: No configuration found. Sep 12 17:34:08.959113 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:34:09.009985 ldconfig[1309]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:34:09.042483 systemd[1]: Reloading finished in 384 ms. Sep 12 17:34:09.064163 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:34:09.065665 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:34:09.090182 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:34:09.113210 systemd[1]: Starting ensure-sysext.service... Sep 12 17:34:09.118064 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:34:09.136122 systemd[1]: Reloading requested from client PID 1395 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:34:09.136139 systemd[1]: Reloading... Sep 12 17:34:09.169211 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:34:09.169656 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:34:09.170903 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:34:09.171392 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Sep 12 17:34:09.171518 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Sep 12 17:34:09.175558 systemd-tmpfiles[1396]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:34:09.175571 systemd-tmpfiles[1396]: Skipping /boot Sep 12 17:34:09.196542 systemd-tmpfiles[1396]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:34:09.196558 systemd-tmpfiles[1396]: Skipping /boot Sep 12 17:34:09.204822 zram_generator::config[1430]: No configuration found. Sep 12 17:34:09.424193 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:34:09.501006 systemd-networkd[1269]: eth0: Gained IPv6LL Sep 12 17:34:09.511005 systemd[1]: Reloading finished in 374 ms. 
Sep 12 17:34:09.535688 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:34:09.538054 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:34:09.551655 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:34:09.567511 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:34:09.572423 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:34:09.575808 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:34:09.581615 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:34:09.587434 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:34:09.594766 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:09.596481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:34:09.603044 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:34:09.609446 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:34:09.627468 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:34:09.629950 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:34:09.630110 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:09.633009 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:34:09.636159 augenrules[1501]: No rules Sep 12 17:34:09.637273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:34:09.637566 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:34:09.639756 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:34:09.642019 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:34:09.642303 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:34:09.645774 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:34:09.646391 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:34:09.664182 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:09.664538 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:34:09.666507 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:34:09.672001 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:34:09.674819 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:34:09.689771 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:34:09.705149 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Sep 12 17:34:09.706613 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:09.708967 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:34:09.736449 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:34:09.738630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:34:09.738900 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:34:09.740636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:34:09.740894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:34:09.742776 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:34:09.743033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:34:09.754156 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:09.754438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:34:09.757851 systemd-resolved[1480]: Positive Trust Anchors: Sep 12 17:34:09.757881 systemd-resolved[1480]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:34:09.757914 systemd-resolved[1480]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:34:09.760201 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:34:09.762043 systemd-resolved[1480]: Defaulting to hostname 'linux'. Sep 12 17:34:09.763890 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:34:09.766207 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:34:09.771017 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:34:09.771460 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:34:09.771601 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:34:09.771695 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:09.773363 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:34:09.776137 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:34:09.778563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:34:09.778845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:34:09.780618 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 12 17:34:09.780889 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:34:09.783317 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:34:09.783567 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:34:09.785644 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:34:09.785943 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:34:09.790343 systemd[1]: Finished ensure-sysext.service. Sep 12 17:34:09.798441 systemd[1]: Reached target network.target - Network. Sep 12 17:34:09.801622 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:34:09.803024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:34:09.804478 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:34:09.804564 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:34:09.819001 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:34:09.896212 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:34:10.350853 systemd-resolved[1480]: Clock change detected. Flushing caches. Sep 12 17:34:10.350870 systemd-timesyncd[1547]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 17:34:10.350915 systemd-timesyncd[1547]: Initial clock synchronization to Fri 2025-09-12 17:34:10.350767 UTC. Sep 12 17:34:10.352033 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:34:10.353346 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:34:10.354837 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:34:10.356243 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:34:10.357722 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:34:10.357760 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:34:10.358781 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:34:10.360077 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:34:10.361447 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:34:10.362835 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:34:10.365094 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:34:10.368925 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:34:10.372429 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:34:10.376313 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:34:10.377695 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:34:10.378922 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:34:10.380279 systemd[1]: System is tainted: cgroupsv1 Sep 12 17:34:10.380328 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Sep 12 17:34:10.380355 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:34:10.382132 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:34:10.385081 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:34:10.389572 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:34:10.393520 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:34:10.396553 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:34:10.400449 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:34:10.402537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:10.408065 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:34:10.416895 jq[1555]: false Sep 12 17:34:10.415589 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:34:10.423946 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:34:10.426755 dbus-daemon[1553]: [system] SELinux support is enabled Sep 12 17:34:10.428690 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:34:10.435601 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:34:10.448231 extend-filesystems[1557]: Found loop3 Sep 12 17:34:10.453714 extend-filesystems[1557]: Found loop4 Sep 12 17:34:10.453714 extend-filesystems[1557]: Found loop5 Sep 12 17:34:10.453714 extend-filesystems[1557]: Found sr0 Sep 12 17:34:10.453714 extend-filesystems[1557]: Found vda Sep 12 17:34:10.453714 extend-filesystems[1557]: Found vda1 Sep 12 17:34:10.453714 extend-filesystems[1557]: Found vda2 Sep 12 17:34:10.453714 extend-filesystems[1557]: Found vda3 Sep 12 17:34:10.453714 extend-filesystems[1557]: Found usr Sep 12 17:34:10.453714 extend-filesystems[1557]: Found vda4 Sep 12 17:34:10.453714 extend-filesystems[1557]: Found vda6 Sep 12 17:34:10.453714 extend-filesystems[1557]: Found vda7 Sep 12 17:34:10.453714 extend-filesystems[1557]: Found vda9 Sep 12 17:34:10.453714 extend-filesystems[1557]: Checking size of /dev/vda9 Sep 12 17:34:10.448685 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:34:10.450919 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:34:10.468648 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:34:10.480522 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:34:10.485280 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:34:10.493424 jq[1588]: true Sep 12 17:34:10.494983 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:34:10.499548 extend-filesystems[1557]: Resized partition /dev/vda9 Sep 12 17:34:10.529702 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1267) Sep 12 17:34:10.529744 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 17:34:10.495333 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 12 17:34:10.529847 extend-filesystems[1595]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:34:10.535492 update_engine[1585]: I20250912 17:34:10.504524 1585 main.cc:92] Flatcar Update Engine starting Sep 12 17:34:10.535492 update_engine[1585]: I20250912 17:34:10.505829 1585 update_check_scheduler.cc:74] Next update check in 4m22s Sep 12 17:34:10.507654 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:34:10.507966 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:34:10.521813 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:34:10.535054 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:34:10.535480 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:34:10.563930 (ntainerd)[1601]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:34:10.564925 jq[1600]: true Sep 12 17:34:10.565634 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:34:10.566052 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:34:10.569764 systemd-logind[1577]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 17:34:10.569789 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:34:10.578368 systemd-logind[1577]: New seat seat0. Sep 12 17:34:10.588323 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:34:10.595161 dbus-daemon[1553]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 17:34:10.601727 tar[1599]: linux-amd64/helm Sep 12 17:34:10.611766 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:34:10.642478 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:34:10.847433 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:34:11.046727 sshd_keygen[1587]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:34:11.046990 extend-filesystems[1595]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:34:11.046990 extend-filesystems[1595]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:34:11.046990 extend-filesystems[1595]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:34:10.847703 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:34:11.052723 extend-filesystems[1557]: Resized filesystem in /dev/vda9 Sep 12 17:34:10.847888 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:34:10.849534 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:34:10.849696 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:34:10.852096 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:34:10.857875 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 12 17:34:10.911596 locksmithd[1639]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:34:11.047107 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:34:11.047587 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:34:11.076865 containerd[1601]: time="2025-09-12T17:34:11.076643292Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:34:11.090933 bash[1633]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:34:11.108395 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:34:11.112551 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:34:11.128093 containerd[1601]: time="2025-09-12T17:34:11.127757929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:11.129994 containerd[1601]: time="2025-09-12T17:34:11.129915185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:11.130091 containerd[1601]: time="2025-09-12T17:34:11.129995666Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:34:11.130091 containerd[1601]: time="2025-09-12T17:34:11.130018800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:34:11.130526 containerd[1601]: time="2025-09-12T17:34:11.130487859Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:34:11.130599 containerd[1601]: time="2025-09-12T17:34:11.130536160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:11.130761 containerd[1601]: time="2025-09-12T17:34:11.130712030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:11.130788 containerd[1601]: time="2025-09-12T17:34:11.130753407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:11.131116 containerd[1601]: time="2025-09-12T17:34:11.131090520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:11.131116 containerd[1601]: time="2025-09-12T17:34:11.131108333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:11.131159 containerd[1601]: time="2025-09-12T17:34:11.131122750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:11.131159 containerd[1601]: time="2025-09-12T17:34:11.131132378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Sep 12 17:34:11.131252 containerd[1601]: time="2025-09-12T17:34:11.131237696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:11.131536 containerd[1601]: time="2025-09-12T17:34:11.131520166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:11.131806 containerd[1601]: time="2025-09-12T17:34:11.131786676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:11.131833 containerd[1601]: time="2025-09-12T17:34:11.131808376Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:34:11.131939 containerd[1601]: time="2025-09-12T17:34:11.131916880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:34:11.132026 containerd[1601]: time="2025-09-12T17:34:11.131999745Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:34:11.187188 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:34:11.189187 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 17:34:11.197192 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:34:11.197733 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:34:11.207204 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:34:11.228502 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:34:11.245723 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:34:11.272562 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:34:11.278738 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:34:11.438904 containerd[1601]: time="2025-09-12T17:34:11.438138223Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:34:11.438904 containerd[1601]: time="2025-09-12T17:34:11.438331236Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.443791558Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.443862551Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.443887898Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.444185917Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.444693509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.444964918Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.444981149Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.444997299Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.445013189Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.445034449Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.445053164Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.445072150Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.445092377Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:34:11.468482 containerd[1601]: time="2025-09-12T17:34:11.445108498Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:34:11.462046 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445123736Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445138033Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445167458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445183659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445197014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445224315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445241908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445259511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445271293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445287173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445301359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445318171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445329102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469131 containerd[1601]: time="2025-09-12T17:34:11.445343068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469441 containerd[1601]: time="2025-09-12T17:34:11.445357956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469441 containerd[1601]: time="2025-09-12T17:34:11.445375278Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:34:11.469441 containerd[1601]: time="2025-09-12T17:34:11.445400986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469441 containerd[1601]: time="2025-09-12T17:34:11.445428428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469441 containerd[1601]: time="2025-09-12T17:34:11.445438647Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:34:11.469441 containerd[1601]: time="2025-09-12T17:34:11.445534647Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:34:11.469441 containerd[1601]: time="2025-09-12T17:34:11.445569122Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:34:11.469441 containerd[1601]: time="2025-09-12T17:34:11.445583549Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:34:11.469441 containerd[1601]: time="2025-09-12T17:34:11.445594730Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:34:11.469441 containerd[1601]: time="2025-09-12T17:34:11.445604057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:34:11.469441 containerd[1601]: time="2025-09-12T17:34:11.445629595Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:34:11.469441 containerd[1601]: time="2025-09-12T17:34:11.445649883Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:34:11.469441 containerd[1601]: time="2025-09-12T17:34:11.445662347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.445980143Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.446038582Z" level=info msg="Connect containerd service" Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.458504715Z" level=info msg="using legacy CRI server" Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.458542115Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.458768911Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.460122930Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 
17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.460368100Z" level=info msg="Start subscribing containerd event" Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.460567404Z" level=info msg="Start recovering state" Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.460693200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.460832792Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.461016777Z" level=info msg="Start event monitor" Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.461071390Z" level=info msg="Start snapshots syncer" Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.461104912Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.461356304Z" level=info msg="Start streaming server" Sep 12 17:34:11.469767 containerd[1601]: time="2025-09-12T17:34:11.463708326Z" level=info msg="containerd successfully booted in 0.403170s" Sep 12 17:34:11.755363 tar[1599]: linux-amd64/LICENSE Sep 12 17:34:11.755363 tar[1599]: linux-amd64/README.md Sep 12 17:34:11.772812 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:34:11.871318 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:34:11.889884 systemd[1]: Started sshd@0-10.0.0.72:22-10.0.0.1:48304.service - OpenSSH per-connection server daemon (10.0.0.1:48304). Sep 12 17:34:11.947381 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 48304 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:34:11.949810 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:11.958619 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:34:11.991986 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:34:12.006847 systemd-logind[1577]: New session 1 of user core. Sep 12 17:34:12.017620 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:34:12.029737 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:34:12.033974 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:34:12.165460 systemd[1685]: Queued start job for default target default.target. Sep 12 17:34:12.165889 systemd[1685]: Created slice app.slice - User Application Slice. Sep 12 17:34:12.165907 systemd[1685]: Reached target paths.target - Paths. Sep 12 17:34:12.165920 systemd[1685]: Reached target timers.target - Timers. Sep 12 17:34:12.176581 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:34:12.184297 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:34:12.184384 systemd[1685]: Reached target sockets.target - Sockets. Sep 12 17:34:12.184399 systemd[1685]: Reached target basic.target - Basic System. Sep 12 17:34:12.184492 systemd[1685]: Reached target default.target - Main User Target. Sep 12 17:34:12.184530 systemd[1685]: Startup finished in 142ms. Sep 12 17:34:12.185046 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:34:12.187946 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 12 17:34:12.248843 systemd[1]: Started sshd@1-10.0.0.72:22-10.0.0.1:48308.service - OpenSSH per-connection server daemon (10.0.0.1:48308). Sep 12 17:34:12.279688 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 48308 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:34:12.282032 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:12.287244 systemd-logind[1577]: New session 2 of user core. Sep 12 17:34:12.295058 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:34:12.341110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:12.342919 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:34:12.345808 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:34:12.348386 systemd[1]: Startup finished in 7.519s (kernel) + 7.474s (userspace) = 14.993s. Sep 12 17:34:12.360540 sshd[1697]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:12.368177 systemd[1]: Started sshd@2-10.0.0.72:22-10.0.0.1:48310.service - OpenSSH per-connection server daemon (10.0.0.1:48310). Sep 12 17:34:12.368851 systemd[1]: sshd@1-10.0.0.72:22-10.0.0.1:48308.service: Deactivated successfully. Sep 12 17:34:12.373190 systemd-logind[1577]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:34:12.374047 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:34:12.376340 systemd-logind[1577]: Removed session 2. Sep 12 17:34:12.398116 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 48310 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:34:12.401209 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:12.407172 systemd-logind[1577]: New session 3 of user core. Sep 12 17:34:12.505029 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:34:12.557595 sshd[1712]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:12.566193 systemd[1]: Started sshd@3-10.0.0.72:22-10.0.0.1:48326.service - OpenSSH per-connection server daemon (10.0.0.1:48326). Sep 12 17:34:12.567110 systemd[1]: sshd@2-10.0.0.72:22-10.0.0.1:48310.service: Deactivated successfully. Sep 12 17:34:12.569845 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:34:12.570738 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:34:12.573104 systemd-logind[1577]: Removed session 3. Sep 12 17:34:12.599331 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 48326 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:34:12.600009 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:12.604940 systemd-logind[1577]: New session 4 of user core. Sep 12 17:34:12.616812 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:34:12.677160 sshd[1729]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:12.690772 systemd[1]: Started sshd@4-10.0.0.72:22-10.0.0.1:48336.service - OpenSSH per-connection server daemon (10.0.0.1:48336). Sep 12 17:34:12.691365 systemd[1]: sshd@3-10.0.0.72:22-10.0.0.1:48326.service: Deactivated successfully. Sep 12 17:34:12.693804 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:34:12.696096 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. 
Sep 12 17:34:12.697340 systemd-logind[1577]: Removed session 4. Sep 12 17:34:12.721568 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 48336 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:34:12.723674 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:12.728225 systemd-logind[1577]: New session 5 of user core. Sep 12 17:34:12.740663 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:34:12.803110 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:34:12.803588 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:34:12.830789 sudo[1743]: pam_unix(sudo:session): session closed for user root Sep 12 17:34:12.832821 sshd[1736]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:12.841679 systemd[1]: Started sshd@5-10.0.0.72:22-10.0.0.1:48352.service - OpenSSH per-connection server daemon (10.0.0.1:48352). Sep 12 17:34:12.842323 systemd[1]: sshd@4-10.0.0.72:22-10.0.0.1:48336.service: Deactivated successfully. Sep 12 17:34:12.845095 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:34:12.847587 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:34:12.848943 systemd-logind[1577]: Removed session 5. Sep 12 17:34:12.870095 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 48352 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:34:12.872436 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:12.877358 systemd-logind[1577]: New session 6 of user core. Sep 12 17:34:12.887709 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:34:13.070157 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:34:13.070604 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:34:13.075149 sudo[1754]: pam_unix(sudo:session): session closed for user root Sep 12 17:34:13.082155 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:34:13.082527 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:34:13.096606 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:34:13.102214 auditctl[1757]: No rules Sep 12 17:34:13.102844 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:34:13.103243 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:34:13.106888 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:34:13.143398 augenrules[1776]: No rules Sep 12 17:34:13.146023 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Sep 12 17:34:13.147739 sudo[1753]: pam_unix(sudo:session): session closed for user root Sep 12 17:34:13.153079 sshd[1747]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:13.158507 kubelet[1709]: E0912 17:34:13.158452 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:34:13.164674 systemd[1]: Started sshd@6-10.0.0.72:22-10.0.0.1:48356.service - OpenSSH per-connection server daemon (10.0.0.1:48356). Sep 12 17:34:13.164990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:34:13.165184 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:34:13.166141 systemd[1]: sshd@5-10.0.0.72:22-10.0.0.1:48352.service: Deactivated successfully. Sep 12 17:34:13.168534 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:34:13.170161 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:34:13.172309 systemd-logind[1577]: Removed session 6. Sep 12 17:34:13.200364 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 48356 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:34:13.202058 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:13.207256 systemd-logind[1577]: New session 7 of user core. Sep 12 17:34:13.220810 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:34:13.277114 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:34:13.277514 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:34:13.570665 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:34:13.570911 (dockerd)[1809]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:34:13.872323 dockerd[1809]: time="2025-09-12T17:34:13.872163669Z" level=info msg="Starting up" Sep 12 17:34:15.913820 dockerd[1809]: time="2025-09-12T17:34:15.913760582Z" level=info msg="Loading containers: start." Sep 12 17:34:16.509902 kernel: Initializing XFRM netlink socket Sep 12 17:34:16.608560 systemd-networkd[1269]: docker0: Link UP Sep 12 17:34:16.747601 dockerd[1809]: time="2025-09-12T17:34:16.747278954Z" level=info msg="Loading containers: done." Sep 12 17:34:16.829165 dockerd[1809]: time="2025-09-12T17:34:16.828978499Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:34:16.829165 dockerd[1809]: time="2025-09-12T17:34:16.829113502Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:34:16.829456 dockerd[1809]: time="2025-09-12T17:34:16.829275185Z" level=info msg="Daemon has completed initialization" Sep 12 17:34:17.062979 dockerd[1809]: time="2025-09-12T17:34:17.062780024Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:34:17.063030 systemd[1]: Started docker.service - Docker Application Container Engine. 
Sep 12 17:34:17.925135 containerd[1601]: time="2025-09-12T17:34:17.925059013Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:34:20.047825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4125870676.mount: Deactivated successfully. Sep 12 17:34:23.415582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:34:23.427732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:23.668669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:23.674881 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:34:24.374292 kubelet[2014]: E0912 17:34:24.374211 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:34:24.381750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:34:24.382057 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:34:27.418148 containerd[1601]: time="2025-09-12T17:34:27.418090005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:27.466480 containerd[1601]: time="2025-09-12T17:34:27.466432422Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 17:34:27.498594 containerd[1601]: time="2025-09-12T17:34:27.498526431Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:27.535686 containerd[1601]: time="2025-09-12T17:34:27.535636138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:27.536648 containerd[1601]: time="2025-09-12T17:34:27.536619403Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 9.611476613s" Sep 12 17:34:27.536705 containerd[1601]: time="2025-09-12T17:34:27.536673073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:34:27.537766 containerd[1601]: time="2025-09-12T17:34:27.537730978Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:34:29.549822 containerd[1601]: time="2025-09-12T17:34:29.549724142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:29.560171 containerd[1601]: time="2025-09-12T17:34:29.560080858Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes 
read=24716632" Sep 12 17:34:29.574332 containerd[1601]: time="2025-09-12T17:34:29.574271265Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:29.589692 containerd[1601]: time="2025-09-12T17:34:29.589608944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:29.591391 containerd[1601]: time="2025-09-12T17:34:29.591324793Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 2.053557457s" Sep 12 17:34:29.591391 containerd[1601]: time="2025-09-12T17:34:29.591374987Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 17:34:29.592037 containerd[1601]: time="2025-09-12T17:34:29.591976325Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:34:34.549351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:34:34.563594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:34.738023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:34.743274 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:34:35.538490 kubelet[2059]: E0912 17:34:35.538358 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:34:35.543307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:34:35.543695 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 17:34:36.441592 containerd[1601]: time="2025-09-12T17:34:36.441520572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:36.475075 containerd[1601]: time="2025-09-12T17:34:36.475030547Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 17:34:36.513691 containerd[1601]: time="2025-09-12T17:34:36.513653342Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:36.560171 containerd[1601]: time="2025-09-12T17:34:36.560110673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:36.561704 containerd[1601]: time="2025-09-12T17:34:36.561653487Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 6.969630465s" Sep 12 17:34:36.561704 containerd[1601]: time="2025-09-12T17:34:36.561697960Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 17:34:36.562906 containerd[1601]: time="2025-09-12T17:34:36.562872363Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:34:39.328019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126378844.mount: Deactivated successfully. 
Sep 12 17:34:39.710691 containerd[1601]: time="2025-09-12T17:34:39.710541932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:39.711726 containerd[1601]: time="2025-09-12T17:34:39.711675999Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 17:34:39.713440 containerd[1601]: time="2025-09-12T17:34:39.713395043Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:39.715919 containerd[1601]: time="2025-09-12T17:34:39.715859816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:39.716394 containerd[1601]: time="2025-09-12T17:34:39.716361728Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 3.153454309s" Sep 12 17:34:39.716457 containerd[1601]: time="2025-09-12T17:34:39.716392766Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 17:34:39.716888 containerd[1601]: time="2025-09-12T17:34:39.716852959Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:34:40.159756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510752181.mount: Deactivated successfully. 
Sep 12 17:34:41.114037 containerd[1601]: time="2025-09-12T17:34:41.113955234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:41.114753 containerd[1601]: time="2025-09-12T17:34:41.114698628Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 17:34:41.116183 containerd[1601]: time="2025-09-12T17:34:41.116124693Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:41.119847 containerd[1601]: time="2025-09-12T17:34:41.119784338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:41.121059 containerd[1601]: time="2025-09-12T17:34:41.121019715Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.404136569s" Sep 12 17:34:41.121105 containerd[1601]: time="2025-09-12T17:34:41.121061724Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:34:41.121803 containerd[1601]: time="2025-09-12T17:34:41.121612667Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:34:41.732641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount148668789.mount: Deactivated successfully. 
Sep 12 17:34:41.808510 containerd[1601]: time="2025-09-12T17:34:41.808460377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:41.809316 containerd[1601]: time="2025-09-12T17:34:41.809284904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:34:41.810509 containerd[1601]: time="2025-09-12T17:34:41.810468103Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:41.813377 containerd[1601]: time="2025-09-12T17:34:41.813334379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:41.814044 containerd[1601]: time="2025-09-12T17:34:41.813987274Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 692.326176ms" Sep 12 17:34:41.814044 containerd[1601]: time="2025-09-12T17:34:41.814035114Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:34:41.814550 containerd[1601]: time="2025-09-12T17:34:41.814511908Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:34:42.347316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1198921586.mount: Deactivated successfully. Sep 12 17:34:45.549772 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 17:34:45.644732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:45.830911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:45.836841 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:34:45.889364 kubelet[2200]: E0912 17:34:45.889282 2200 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:34:45.894138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:34:45.894542 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 17:34:46.979703 containerd[1601]: time="2025-09-12T17:34:46.979595722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:47.005193 containerd[1601]: time="2025-09-12T17:34:47.005141300Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 17:34:47.036601 containerd[1601]: time="2025-09-12T17:34:47.036535004Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:47.049488 containerd[1601]: time="2025-09-12T17:34:47.049438560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:47.050827 containerd[1601]: time="2025-09-12T17:34:47.050785936Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.236244022s" Sep 12 17:34:47.050827 containerd[1601]: time="2025-09-12T17:34:47.050819039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 17:34:49.278583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:49.289629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:49.316691 systemd[1]: Reloading requested from client PID 2238 ('systemctl') (unit session-7.scope)... Sep 12 17:34:49.316708 systemd[1]: Reloading... Sep 12 17:34:49.432447 zram_generator::config[2280]: No configuration found. Sep 12 17:34:50.842134 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:34:50.918503 systemd[1]: Reloading finished in 1601 ms. Sep 12 17:34:50.965798 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:34:50.965910 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:34:50.966291 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:50.982809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:51.174293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:51.179240 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:34:51.259005 kubelet[2337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:34:51.259005 kubelet[2337]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 12 17:34:51.259005 kubelet[2337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:34:51.259564 kubelet[2337]: I0912 17:34:51.259048 2337 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:34:51.493853 kubelet[2337]: I0912 17:34:51.493695 2337 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:34:51.493853 kubelet[2337]: I0912 17:34:51.493731 2337 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:34:51.494029 kubelet[2337]: I0912 17:34:51.493981 2337 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:34:51.525983 kubelet[2337]: E0912 17:34:51.525920 2337 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:51.526972 kubelet[2337]: I0912 17:34:51.526949 2337 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:34:51.535096 kubelet[2337]: E0912 17:34:51.535043 2337 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:34:51.535096 kubelet[2337]: I0912 17:34:51.535075 2337 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:34:51.541780 kubelet[2337]: I0912 17:34:51.541717 2337 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:34:51.543072 kubelet[2337]: I0912 17:34:51.543023 2337 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:34:51.543369 kubelet[2337]: I0912 17:34:51.543297 2337 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:34:51.543629 kubelet[2337]: I0912 17:34:51.543357 2337 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 17:34:51.543767 kubelet[2337]: I0912 17:34:51.543632 2337 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:34:51.543767 kubelet[2337]: I0912 17:34:51.543645 2337 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:34:51.543842 kubelet[2337]: I0912 17:34:51.543831 2337 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:34:51.546984 kubelet[2337]: I0912 17:34:51.546945 2337 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:34:51.547050 kubelet[2337]: I0912 17:34:51.546999 2337 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:34:51.547079 kubelet[2337]: I0912 17:34:51.547066 2337 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:34:51.547125 kubelet[2337]: I0912 17:34:51.547103 2337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:34:51.552262 kubelet[2337]: I0912 17:34:51.552233 2337 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:34:51.552830 kubelet[2337]: I0912 17:34:51.552807 2337 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:34:51.552908 kubelet[2337]: W0912 17:34:51.552896 2337 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
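
The container-manager dump above lists the node's HardEvictionThresholds: memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%, imagefs.inodesFree below 5%. Each threshold carries either an absolute Quantity or a Percentage of capacity. A minimal sketch of how such a quantity-or-percentage "LessThan" comparison works (illustrative only, not kubelet's actual eviction code; the sample available/capacity numbers are invented):

```go
package main

import "fmt"

// Threshold mirrors the shape in the nodeConfig dump above: a signal is
// compared against either an absolute Quantity (bytes) or a Percentage
// of capacity. Quantity == 0 means "use Percentage" in this sketch.
type Threshold struct {
	Signal     string
	Quantity   int64   // absolute bytes
	Percentage float64 // fraction of capacity, e.g. 0.1 for 10%
}

// exceeded reports whether the observed available amount is below the
// threshold, which is what "Operator":"LessThan" means in the dump.
func exceeded(t Threshold, available, capacity int64) bool {
	limit := t.Quantity
	if limit == 0 {
		limit = int64(t.Percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := Threshold{Signal: "memory.available", Quantity: 100 << 20} // 100Mi
	nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.1}     // 10%

	fmt.Println(exceeded(memory, 64<<20, 8<<30)) // true: 64Mi available < 100Mi
	fmt.Println(exceeded(nodefs, 5<<30, 40<<30)) // false: 5Gi >= 4Gi (10% of 40Gi)
}
```
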
Sep 12 17:34:51.553972 kubelet[2337]: W0912 17:34:51.553890 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Sep 12 17:34:51.554319 kubelet[2337]: E0912 17:34:51.554281 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:51.557238 kubelet[2337]: I0912 17:34:51.555425 2337 server.go:1274] "Started kubelet" Sep 12 17:34:51.557238 kubelet[2337]: I0912 17:34:51.556552 2337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:34:51.557238 kubelet[2337]: W0912 17:34:51.556773 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Sep 12 17:34:51.557238 kubelet[2337]: E0912 17:34:51.556818 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:51.557238 kubelet[2337]: I0912 17:34:51.556966 2337 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:34:51.557238 kubelet[2337]: I0912 17:34:51.557135 2337 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:34:51.557511 kubelet[2337]: I0912 17:34:51.557492 2337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:34:51.557956 kubelet[2337]: I0912 17:34:51.557934 2337 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:34:51.560253 kubelet[2337]: I0912 17:34:51.560225 2337 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:34:51.560883 kubelet[2337]: I0912 17:34:51.560859 2337 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:34:51.561018 kubelet[2337]: I0912 17:34:51.560999 2337 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:34:51.561085 kubelet[2337]: I0912 17:34:51.561069 2337 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:34:51.561820 kubelet[2337]: W0912 17:34:51.561397 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Sep 12 17:34:51.561893 kubelet[2337]: E0912 17:34:51.561816 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Sep 12 
17:34:51.561977 kubelet[2337]: I0912 17:34:51.561958 2337 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:34:51.562045 kubelet[2337]: I0912 17:34:51.562026 2337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:34:51.562241 kubelet[2337]: E0912 17:34:51.562196 2337 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:34:51.564439 kubelet[2337]: E0912 17:34:51.562668 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:51.564439 kubelet[2337]: E0912 17:34:51.562734 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="200ms" Sep 12 17:34:51.564439 kubelet[2337]: E0912 17:34:51.561971 2337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.72:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.72:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186499772365db0a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:34:51.555371786 +0000 UTC m=+0.371195975,LastTimestamp:2025-09-12 17:34:51.555371786 +0000 UTC m=+0.371195975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:34:51.564439 kubelet[2337]: I0912 17:34:51.563756 2337 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:34:51.593061 kubelet[2337]: I0912 17:34:51.593012 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:34:51.594500 kubelet[2337]: I0912 17:34:51.594393 2337 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:34:51.594544 kubelet[2337]: I0912 17:34:51.594514 2337 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:34:51.594612 kubelet[2337]: I0912 17:34:51.594544 2337 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:34:51.594644 kubelet[2337]: E0912 17:34:51.594603 2337 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:34:51.596194 kubelet[2337]: I0912 17:34:51.595947 2337 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:34:51.596194 kubelet[2337]: I0912 17:34:51.595968 2337 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:34:51.596194 kubelet[2337]: W0912 17:34:51.595947 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Sep 12 17:34:51.596194 kubelet[2337]: I0912 17:34:51.595989 2337 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:34:51.596194 kubelet[2337]: E0912 17:34:51.596084 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:51.663359 kubelet[2337]: E0912 17:34:51.663288 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:51.695380 kubelet[2337]: E0912 17:34:51.695348 2337 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:34:51.763341 kubelet[2337]: E0912 17:34:51.763203 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="400ms" Sep 12 17:34:51.764221 kubelet[2337]: E0912 17:34:51.764200 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:51.864979 kubelet[2337]: E0912 17:34:51.864916 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:51.896232 kubelet[2337]: E0912 17:34:51.896168 2337 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:34:51.965847 kubelet[2337]: E0912 17:34:51.965782 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:52.066174 kubelet[2337]: E0912 17:34:52.066115 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:52.164225 kubelet[2337]: E0912 17:34:52.164163 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="800ms" Sep 12 17:34:52.166243 kubelet[2337]: E0912 17:34:52.166203 2337 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"localhost\" not found" Sep 12 17:34:52.266918 kubelet[2337]: E0912 17:34:52.266835 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:52.297094 kubelet[2337]: E0912 17:34:52.297033 2337 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:34:52.316085 kubelet[2337]: E0912 17:34:52.315958 2337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.72:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.72:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186499772365db0a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:34:51.555371786 +0000 UTC m=+0.371195975,LastTimestamp:2025-09-12 17:34:51.555371786 +0000 UTC m=+0.371195975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:34:52.367760 kubelet[2337]: E0912 17:34:52.367566 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:52.468256 kubelet[2337]: E0912 17:34:52.468182 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:52.536218 kubelet[2337]: W0912 17:34:52.536136 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Sep 12 17:34:52.536218 kubelet[2337]: E0912 17:34:52.536214 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:52.569143 kubelet[2337]: E0912 17:34:52.569069 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:52.631390 kubelet[2337]: W0912 17:34:52.631240 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Sep 12 17:34:52.631390 kubelet[2337]: E0912 17:34:52.631307 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:52.669391 kubelet[2337]: E0912 17:34:52.669300 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:52.754156 kubelet[2337]: W0912 17:34:52.754056 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Sep 12 17:34:52.754156 kubelet[2337]: E0912 17:34:52.754147 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:52.769884 kubelet[2337]: E0912 17:34:52.769831 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:52.870541 kubelet[2337]: E0912 17:34:52.870480 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:52.964521 kubelet[2337]: W0912 17:34:52.964330 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Sep 12 17:34:52.964521 kubelet[2337]: E0912 17:34:52.964442 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:52.964805 kubelet[2337]: E0912 17:34:52.964764 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="1.6s" Sep 12 17:34:52.971194 kubelet[2337]: E0912 17:34:52.971158 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:53.071396 kubelet[2337]: E0912 17:34:53.071321 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:53.097616 kubelet[2337]: E0912 17:34:53.097529 2337 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:34:53.172307 kubelet[2337]: E0912 17:34:53.172237 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:53.273109 kubelet[2337]: E0912 17:34:53.272945 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:53.373788 kubelet[2337]: E0912 17:34:53.373710 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:53.474438 kubelet[2337]: E0912 17:34:53.474373 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:53.520189 kubelet[2337]: I0912 17:34:53.520141 2337 policy_none.go:49] "None policy: Start" Sep 12 17:34:53.521077 kubelet[2337]: I0912 17:34:53.521056 2337 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:34:53.521163 kubelet[2337]: I0912 17:34:53.521082 2337 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:34:53.575003 kubelet[2337]: E0912 17:34:53.574943 2337 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:53.615740 kubelet[2337]: I0912 17:34:53.615689 2337 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:34:53.615975 kubelet[2337]: I0912 17:34:53.615947 2337 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:34:53.616021 kubelet[2337]: I0912 17:34:53.615974 2337 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:34:53.616920 kubelet[2337]: I0912 17:34:53.616815 2337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:34:53.617811 kubelet[2337]: E0912 17:34:53.617792 2337 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:34:53.718113 kubelet[2337]: I0912 17:34:53.718055 2337 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:34:53.718513 kubelet[2337]: E0912 17:34:53.718485 2337 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Sep 12 17:34:53.726504 kubelet[2337]: E0912 17:34:53.726476 2337 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:53.921074 kubelet[2337]: I0912 17:34:53.920433 2337 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:34:53.921074 kubelet[2337]: E0912 17:34:53.920799 2337 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Sep 12 17:34:54.326372 kubelet[2337]: I0912 17:34:54.326337 2337 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:34:54.326907 kubelet[2337]: E0912 17:34:54.326758 2337 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Sep 12 17:34:54.565658 kubelet[2337]: E0912 17:34:54.565583 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="3.2s" Sep 12 17:34:54.706147 kubelet[2337]: W0912 17:34:54.706018 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Sep 12 17:34:54.706147 kubelet[2337]: E0912 17:34:54.706068 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" 
logger="UnhandledError" Sep 12 17:34:54.759872 kubelet[2337]: W0912 17:34:54.759812 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Sep 12 17:34:54.759872 kubelet[2337]: E0912 17:34:54.759876 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:54.781542 kubelet[2337]: I0912 17:34:54.781467 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/047551d3970e2d9b26ca7591d3d68b06-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"047551d3970e2d9b26ca7591d3d68b06\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:34:54.781542 kubelet[2337]: I0912 17:34:54.781523 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:54.781542 kubelet[2337]: I0912 17:34:54.781546 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:54.781542 kubelet[2337]: I0912 17:34:54.781566 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:54.781842 kubelet[2337]: I0912 17:34:54.781594 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:54.781842 kubelet[2337]: I0912 17:34:54.781611 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:34:54.781842 kubelet[2337]: I0912 17:34:54.781628 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/047551d3970e2d9b26ca7591d3d68b06-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"047551d3970e2d9b26ca7591d3d68b06\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:34:54.781842 kubelet[2337]: I0912 17:34:54.781645 2337 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/047551d3970e2d9b26ca7591d3d68b06-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"047551d3970e2d9b26ca7591d3d68b06\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:34:54.781842 kubelet[2337]: I0912 17:34:54.781665 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:55.004997 kubelet[2337]: E0912 17:34:55.004843 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:55.005761 kubelet[2337]: E0912 17:34:55.005727 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:55.005839 containerd[1601]: time="2025-09-12T17:34:55.005761207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:047551d3970e2d9b26ca7591d3d68b06,Namespace:kube-system,Attempt:0,}" Sep 12 17:34:55.007556 kubelet[2337]: E0912 17:34:55.007513 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:55.008100 containerd[1601]: time="2025-09-12T17:34:55.008056644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 17:34:55.035442 containerd[1601]: time="2025-09-12T17:34:55.035382656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 17:34:55.128669 kubelet[2337]: I0912 17:34:55.128633 2337 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:34:55.129120 kubelet[2337]: E0912 17:34:55.129064 2337 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Sep 12 17:34:55.359709 update_engine[1585]: I20250912 17:34:55.359555 1585 update_attempter.cc:509] Updating boot flags... 
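
The dns.go entries above report "Nameserver limits exceeded" and show the applied line clamped to three servers: 1.1.1.1 1.0.0.1 8.8.8.8. The glibc resolver honors at most three nameserver entries, which is why extra entries are dropped with a warning. A minimal sketch of that clamping (illustrative, not kubelet's dns.go; the fourth nameserver 8.8.4.4 in the sample is hypothetical):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Assumption: three matches the resolver limit the warning refers to.
const maxNameservers = 3

// clampNameservers collects the nameserver entries from a resolv.conf
// body and keeps only the first maxNameservers, as the log describes.
func clampNameservers(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // the rest are omitted
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	fmt.Println(clampNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```
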
Sep 12 17:34:55.444534 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2379) Sep 12 17:34:55.447198 kubelet[2337]: W0912 17:34:55.447098 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Sep 12 17:34:55.447198 kubelet[2337]: E0912 17:34:55.447153 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:55.488160 kubelet[2337]: W0912 17:34:55.488099 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Sep 12 17:34:55.488338 kubelet[2337]: E0912 17:34:55.488317 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:55.488437 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2379) Sep 12 17:34:55.520532 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2379) Sep 12 17:34:55.533954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount535486462.mount: Deactivated successfully. 
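
The "Failed to ensure lease exists, will retry" entries earlier in this log back off by doubling: interval="200ms", then "400ms", "800ms", "1.6s", "3.2s", and the reflector list/watch retries above follow the same pattern while the API server at 10.0.0.72:6443 refuses connections. A minimal sketch of that doubling schedule (the 7s cap is an assumption for illustration, not taken from this log):

```go
package main

import (
	"fmt"
	"time"
)

// backoffIntervals produces the doubling retry schedule visible in the
// lease-controller entries above, capped at limit.
func backoffIntervals(base, limit time.Duration, n int) []time.Duration {
	out := make([]time.Duration, 0, n)
	d := base
	for i := 0; i < n; i++ {
		out = append(out, d)
		d *= 2
		if d > limit {
			d = limit
		}
	}
	return out
}

func main() {
	for _, d := range backoffIntervals(200*time.Millisecond, 7*time.Second, 6) {
		fmt.Println(d) // 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s
	}
}
```
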
Sep 12 17:34:55.538551 containerd[1601]: time="2025-09-12T17:34:55.538519616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:55.541305 containerd[1601]: time="2025-09-12T17:34:55.541249446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:34:55.545196 containerd[1601]: time="2025-09-12T17:34:55.544180077Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:55.546241 containerd[1601]: time="2025-09-12T17:34:55.546220830Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:55.549189 containerd[1601]: time="2025-09-12T17:34:55.549121484Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:34:55.550312 containerd[1601]: time="2025-09-12T17:34:55.550265114Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:55.551208 containerd[1601]: time="2025-09-12T17:34:55.551165362Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:34:55.552821 containerd[1601]: time="2025-09-12T17:34:55.552797248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:55.554887 containerd[1601]: time="2025-09-12T17:34:55.554866876Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 519.38358ms" Sep 12 17:34:55.556564 containerd[1601]: time="2025-09-12T17:34:55.556503431Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 550.64029ms" Sep 12 17:34:55.561553 containerd[1601]: time="2025-09-12T17:34:55.561515993Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 553.368797ms" Sep 12 17:34:55.771279 containerd[1601]: time="2025-09-12T17:34:55.771031075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:55.771279 containerd[1601]: time="2025-09-12T17:34:55.771124672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:55.771279 containerd[1601]: time="2025-09-12T17:34:55.771143879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:55.771470 containerd[1601]: time="2025-09-12T17:34:55.771261071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:55.773802 containerd[1601]: time="2025-09-12T17:34:55.772805732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:55.773802 containerd[1601]: time="2025-09-12T17:34:55.772939857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:55.773802 containerd[1601]: time="2025-09-12T17:34:55.773106814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:55.773802 containerd[1601]: time="2025-09-12T17:34:55.773253021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:55.774775 containerd[1601]: time="2025-09-12T17:34:55.774480060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:55.774775 containerd[1601]: time="2025-09-12T17:34:55.774535364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:55.774775 containerd[1601]: time="2025-09-12T17:34:55.774549792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:55.774775 containerd[1601]: time="2025-09-12T17:34:55.774658028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:55.910714 containerd[1601]: time="2025-09-12T17:34:55.910646821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:047551d3970e2d9b26ca7591d3d68b06,Namespace:kube-system,Attempt:0,} returns sandbox id \"941f354fd27a5f734895c49e2a35f4c0558757b7c493a79acc51f4d0dda82136\"" Sep 12 17:34:55.912251 kubelet[2337]: E0912 17:34:55.912146 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:55.918371 containerd[1601]: time="2025-09-12T17:34:55.918315713Z" level=info msg="CreateContainer within sandbox \"941f354fd27a5f734895c49e2a35f4c0558757b7c493a79acc51f4d0dda82136\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:34:55.919802 containerd[1601]: time="2025-09-12T17:34:55.919762839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ddf02ae80c1fff9c8db8d65d8f9da8d7be038b3dc4e2c192fa2a767f5e333e0\"" Sep 12 17:34:55.921831 kubelet[2337]: E0912 17:34:55.921638 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:55.928754 containerd[1601]: time="2025-09-12T17:34:55.928696331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d643914ab114b6503fea1d1258232a8a15dbffb7a701236b1211f87fd0881cca\"" Sep 12 17:34:55.930297 containerd[1601]: time="2025-09-12T17:34:55.929140383Z" level=info msg="CreateContainer within sandbox \"6ddf02ae80c1fff9c8db8d65d8f9da8d7be038b3dc4e2c192fa2a767f5e333e0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:34:55.930529 kubelet[2337]: E0912 17:34:55.930511 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:55.940155 containerd[1601]: time="2025-09-12T17:34:55.940091884Z" level=info msg="CreateContainer within sandbox \"d643914ab114b6503fea1d1258232a8a15dbffb7a701236b1211f87fd0881cca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:34:55.943522 containerd[1601]: time="2025-09-12T17:34:55.943349536Z" level=info msg="CreateContainer within sandbox \"941f354fd27a5f734895c49e2a35f4c0558757b7c493a79acc51f4d0dda82136\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8643f773988fe6e18905722c7fa6e9c9f5adfdf162136da06d6ac0d4b2a4a15c\"" Sep 12 17:34:55.945208 containerd[1601]: time="2025-09-12T17:34:55.945171663Z" level=info msg="StartContainer for \"8643f773988fe6e18905722c7fa6e9c9f5adfdf162136da06d6ac0d4b2a4a15c\"" Sep 12 17:34:55.959987 containerd[1601]: time="2025-09-12T17:34:55.959837041Z" level=info msg="CreateContainer within sandbox \"6ddf02ae80c1fff9c8db8d65d8f9da8d7be038b3dc4e2c192fa2a767f5e333e0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0574999871ea4fad3bf300080ef5078ea0407eb181a35340d797d186fc0a7def\"" Sep 12 17:34:55.960631 containerd[1601]: time="2025-09-12T17:34:55.960605930Z" level=info msg="StartContainer for 
\"0574999871ea4fad3bf300080ef5078ea0407eb181a35340d797d186fc0a7def\"" Sep 12 17:34:55.970468 containerd[1601]: time="2025-09-12T17:34:55.970038238Z" level=info msg="CreateContainer within sandbox \"d643914ab114b6503fea1d1258232a8a15dbffb7a701236b1211f87fd0881cca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f99f339ea2c895b114231c9bb8eed0259576763fdd63ba4dc1b839f127cc0dc8\"" Sep 12 17:34:55.971949 containerd[1601]: time="2025-09-12T17:34:55.970745581Z" level=info msg="StartContainer for \"f99f339ea2c895b114231c9bb8eed0259576763fdd63ba4dc1b839f127cc0dc8\"" Sep 12 17:34:56.037328 containerd[1601]: time="2025-09-12T17:34:56.037184010Z" level=info msg="StartContainer for \"8643f773988fe6e18905722c7fa6e9c9f5adfdf162136da06d6ac0d4b2a4a15c\" returns successfully" Sep 12 17:34:56.078595 containerd[1601]: time="2025-09-12T17:34:56.075706505Z" level=info msg="StartContainer for \"f99f339ea2c895b114231c9bb8eed0259576763fdd63ba4dc1b839f127cc0dc8\" returns successfully" Sep 12 17:34:56.091440 containerd[1601]: time="2025-09-12T17:34:56.091349867Z" level=info msg="StartContainer for \"0574999871ea4fad3bf300080ef5078ea0407eb181a35340d797d186fc0a7def\" returns successfully" Sep 12 17:34:56.610261 kubelet[2337]: E0912 17:34:56.610233 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:56.612651 kubelet[2337]: E0912 17:34:56.612461 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:56.614075 kubelet[2337]: E0912 17:34:56.614061 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:56.732638 kubelet[2337]: I0912 17:34:56.732525 2337 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:34:57.377566 kubelet[2337]: I0912 17:34:57.377506 2337 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:34:57.377566 kubelet[2337]: E0912 17:34:57.377558 2337 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:34:57.386632 kubelet[2337]: E0912 17:34:57.386594 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:57.487230 kubelet[2337]: E0912 17:34:57.487178 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:57.587358 kubelet[2337]: E0912 17:34:57.587298 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:57.615906 kubelet[2337]: E0912 17:34:57.615872 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:57.688572 kubelet[2337]: E0912 17:34:57.688437 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:57.789417 kubelet[2337]: E0912 17:34:57.789344 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:57.889589 kubelet[2337]: E0912 
17:34:57.889533 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:57.990331 kubelet[2337]: E0912 17:34:57.990170 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:58.090509 kubelet[2337]: E0912 17:34:58.090468 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:58.191279 kubelet[2337]: E0912 17:34:58.191224 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:58.292010 kubelet[2337]: E0912 17:34:58.291864 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:58.392067 kubelet[2337]: E0912 17:34:58.391990 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:58.493174 kubelet[2337]: E0912 17:34:58.493122 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:58.593432 kubelet[2337]: E0912 17:34:58.593366 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:58.694491 kubelet[2337]: E0912 17:34:58.694381 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:58.795056 kubelet[2337]: E0912 17:34:58.794977 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:58.895786 kubelet[2337]: E0912 17:34:58.895569 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:59.575743 kubelet[2337]: I0912 17:34:59.575709 2337 apiserver.go:52] "Watching apiserver" Sep 12 17:34:59.661220 kubelet[2337]: I0912 17:34:59.661175 2337 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:34:59.732397 systemd[1]: Reloading requested from client PID 2630 ('systemctl') (unit session-7.scope)... Sep 12 17:34:59.732456 systemd[1]: Reloading... Sep 12 17:34:59.821463 zram_generator::config[2672]: No configuration found. Sep 12 17:34:59.947250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:35:00.034922 systemd[1]: Reloading finished in 301 ms. Sep 12 17:35:00.075660 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:35:00.100287 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:35:00.100924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:35:00.110765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:35:00.308956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:35:00.314888 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:35:00.362444 kubelet[2724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:35:00.362444 kubelet[2724]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:35:00.362444 kubelet[2724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:35:00.362444 kubelet[2724]: I0912 17:35:00.362156 2724 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:35:00.371562 kubelet[2724]: I0912 17:35:00.371511 2724 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:35:00.371562 kubelet[2724]: I0912 17:35:00.371546 2724 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:35:00.371867 kubelet[2724]: I0912 17:35:00.371835 2724 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:35:00.373399 kubelet[2724]: I0912 17:35:00.373364 2724 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:35:00.377620 kubelet[2724]: I0912 17:35:00.377572 2724 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:35:00.381143 kubelet[2724]: E0912 17:35:00.381098 2724 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:35:00.381143 kubelet[2724]: I0912 17:35:00.381138 2724 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:35:00.386755 kubelet[2724]: I0912 17:35:00.386721 2724 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:35:00.387346 kubelet[2724]: I0912 17:35:00.387331 2724 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:35:00.387542 kubelet[2724]: I0912 17:35:00.387503 2724 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:35:00.387728 kubelet[2724]: I0912 17:35:00.387539 2724 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 17:35:00.387809 kubelet[2724]: I0912 17:35:00.387730 2724 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:35:00.387809 kubelet[2724]: I0912 17:35:00.387739 2724 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:35:00.387809 kubelet[2724]: I0912 17:35:00.387770 2724 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:35:00.387891 kubelet[2724]: I0912 17:35:00.387876 2724 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:35:00.387891 kubelet[2724]: I0912 17:35:00.387890 2724 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:35:00.387930 kubelet[2724]: I0912 17:35:00.387923 2724 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:35:00.387953 kubelet[2724]: I0912 17:35:00.387934 2724 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:35:00.388793 kubelet[2724]: I0912 17:35:00.388624 2724 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:35:00.391052 kubelet[2724]: I0912 17:35:00.389936 2724 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:35:00.391052 kubelet[2724]: I0912 17:35:00.390483 2724 server.go:1274] "Started kubelet" Sep 12 17:35:00.392966 kubelet[2724]: I0912 17:35:00.392702 2724 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:35:00.393834 kubelet[2724]: I0912 
17:35:00.393809 2724 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:35:00.395564 kubelet[2724]: I0912 17:35:00.395525 2724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:35:00.395806 kubelet[2724]: I0912 17:35:00.395779 2724 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:35:00.397787 kubelet[2724]: E0912 17:35:00.397768 2724 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:35:00.402194 kubelet[2724]: I0912 17:35:00.402159 2724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:35:00.403450 kubelet[2724]: I0912 17:35:00.403213 2724 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:35:00.403450 kubelet[2724]: I0912 17:35:00.403437 2724 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:35:00.406232 kubelet[2724]: I0912 17:35:00.405624 2724 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:35:00.406232 kubelet[2724]: I0912 17:35:00.405912 2724 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:35:00.406448 kubelet[2724]: I0912 17:35:00.406385 2724 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:35:00.409713 kubelet[2724]: I0912 17:35:00.408577 2724 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:35:00.409713 kubelet[2724]: I0912 17:35:00.408594 2724 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:35:00.418469 kubelet[2724]: I0912 17:35:00.418374 2724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:35:00.419961 kubelet[2724]: I0912 17:35:00.419934 2724 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:35:00.419961 kubelet[2724]: I0912 17:35:00.419956 2724 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:35:00.419961 kubelet[2724]: I0912 17:35:00.419977 2724 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:35:00.420152 kubelet[2724]: E0912 17:35:00.420018 2724 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:35:00.467980 kubelet[2724]: I0912 17:35:00.467944 2724 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:35:00.467980 kubelet[2724]: I0912 17:35:00.467968 2724 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:35:00.467980 kubelet[2724]: I0912 17:35:00.467990 2724 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:35:00.468195 kubelet[2724]: I0912 17:35:00.468177 2724 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:35:00.468219 kubelet[2724]: I0912 17:35:00.468194 2724 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:35:00.468219 kubelet[2724]: I0912 17:35:00.468218 2724 policy_none.go:49] "None policy: Start" Sep 12 17:35:00.469095 kubelet[2724]: I0912 17:35:00.469066 2724 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:35:00.469141 kubelet[2724]: I0912 17:35:00.469105 2724 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:35:00.469336 kubelet[2724]: I0912 17:35:00.469306 2724 state_mem.go:75] "Updated machine memory state" Sep 12 17:35:00.471043 kubelet[2724]: I0912 17:35:00.470999 2724 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:35:00.471245 kubelet[2724]: I0912 17:35:00.471219 2724 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:35:00.471281 kubelet[2724]: I0912 17:35:00.471239 2724 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:35:00.471715 kubelet[2724]: I0912 17:35:00.471687 2724 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:35:00.580816 kubelet[2724]: I0912 17:35:00.580681 2724 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:35:00.587373 kubelet[2724]: I0912 17:35:00.587329 2724 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 17:35:00.587539 kubelet[2724]: I0912 17:35:00.587399 2724 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:35:00.607552 kubelet[2724]: I0912 17:35:00.607509 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:00.607620 kubelet[2724]: I0912 17:35:00.607561 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:00.607620 kubelet[2724]: I0912 17:35:00.607588 2724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/047551d3970e2d9b26ca7591d3d68b06-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"047551d3970e2d9b26ca7591d3d68b06\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:35:00.607620 kubelet[2724]: I0912 17:35:00.607608 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/047551d3970e2d9b26ca7591d3d68b06-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"047551d3970e2d9b26ca7591d3d68b06\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:35:00.607741 kubelet[2724]: I0912 17:35:00.607640 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:00.607741 kubelet[2724]: I0912 17:35:00.607660 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:00.607741 kubelet[2724]: I0912 17:35:00.607681 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:00.607741 kubelet[2724]: I0912 17:35:00.607702 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:35:00.607741 kubelet[2724]: I0912 17:35:00.607721 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/047551d3970e2d9b26ca7591d3d68b06-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"047551d3970e2d9b26ca7591d3d68b06\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:35:00.831742 kubelet[2724]: E0912 17:35:00.831581 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:00.831903 kubelet[2724]: E0912 17:35:00.831804 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:00.831967 kubelet[2724]: E0912 17:35:00.831947 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:01.388991 kubelet[2724]: I0912 17:35:01.388941 2724 apiserver.go:52] "Watching apiserver" Sep 12 17:35:01.406788 kubelet[2724]: I0912 17:35:01.406756 2724 
desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:35:01.434969 kubelet[2724]: E0912 17:35:01.434486 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:01.440311 kubelet[2724]: E0912 17:35:01.440269 2724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 17:35:01.440505 kubelet[2724]: E0912 17:35:01.440472 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:01.441675 kubelet[2724]: E0912 17:35:01.441624 2724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:35:01.441846 kubelet[2724]: E0912 17:35:01.441826 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:01.455429 kubelet[2724]: I0912 17:35:01.455332 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.455275928 podStartE2EDuration="1.455275928s" podCreationTimestamp="2025-09-12 17:35:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:01.45506494 +0000 UTC m=+1.135666575" watchObservedRunningTime="2025-09-12 17:35:01.455275928 +0000 UTC m=+1.135877564" Sep 12 17:35:01.472374 kubelet[2724]: I0912 17:35:01.471809 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.471783547 podStartE2EDuration="1.471783547s" podCreationTimestamp="2025-09-12 17:35:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:01.464474708 +0000 UTC m=+1.145076354" watchObservedRunningTime="2025-09-12 17:35:01.471783547 +0000 UTC m=+1.152385182" Sep 12 17:35:01.472374 kubelet[2724]: I0912 17:35:01.471941 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.471936145 podStartE2EDuration="1.471936145s" podCreationTimestamp="2025-09-12 17:35:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:01.471735155 +0000 UTC m=+1.152336790" watchObservedRunningTime="2025-09-12 17:35:01.471936145 +0000 UTC m=+1.152537780" Sep 12 17:35:02.435779 kubelet[2724]: E0912 17:35:02.435745 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:02.436331 kubelet[2724]: E0912 17:35:02.435960 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:05.260128 kubelet[2724]: E0912 17:35:05.260034 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:05.923265 kubelet[2724]: I0912 17:35:05.923220 2724 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:35:05.923713 containerd[1601]: time="2025-09-12T17:35:05.923657417Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:35:05.924123 kubelet[2724]: I0912 17:35:05.923828 2724 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:35:06.343659 kubelet[2724]: I0912 17:35:06.343604 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d1d722a-546d-4c81-b469-ac9b97c7e8a3-kube-proxy\") pod \"kube-proxy-hxhc6\" (UID: \"3d1d722a-546d-4c81-b469-ac9b97c7e8a3\") " pod="kube-system/kube-proxy-hxhc6" Sep 12 17:35:06.343659 kubelet[2724]: I0912 17:35:06.343646 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d1d722a-546d-4c81-b469-ac9b97c7e8a3-lib-modules\") pod \"kube-proxy-hxhc6\" (UID: \"3d1d722a-546d-4c81-b469-ac9b97c7e8a3\") " pod="kube-system/kube-proxy-hxhc6" Sep 12 17:35:06.343659 kubelet[2724]: I0912 17:35:06.343668 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnkqw\" (UniqueName: \"kubernetes.io/projected/3d1d722a-546d-4c81-b469-ac9b97c7e8a3-kube-api-access-fnkqw\") pod \"kube-proxy-hxhc6\" (UID: \"3d1d722a-546d-4c81-b469-ac9b97c7e8a3\") " pod="kube-system/kube-proxy-hxhc6" Sep 12 17:35:06.344271 kubelet[2724]: I0912 17:35:06.343690 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d1d722a-546d-4c81-b469-ac9b97c7e8a3-xtables-lock\") pod \"kube-proxy-hxhc6\" (UID: \"3d1d722a-546d-4c81-b469-ac9b97c7e8a3\") " pod="kube-system/kube-proxy-hxhc6" Sep 12 17:35:06.449794 kubelet[2724]: E0912 17:35:06.449758 2724 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 12 17:35:06.449794 kubelet[2724]: E0912 17:35:06.449786 2724 projected.go:194] Error preparing data for projected volume kube-api-access-fnkqw for pod kube-system/kube-proxy-hxhc6: configmap "kube-root-ca.crt" not found Sep 12 17:35:06.450014 kubelet[2724]: E0912 17:35:06.449833 2724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d1d722a-546d-4c81-b469-ac9b97c7e8a3-kube-api-access-fnkqw podName:3d1d722a-546d-4c81-b469-ac9b97c7e8a3 nodeName:}" failed. No retries permitted until 2025-09-12 17:35:06.949816392 +0000 UTC m=+6.630418028 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fnkqw" (UniqueName: "kubernetes.io/projected/3d1d722a-546d-4c81-b469-ac9b97c7e8a3-kube-api-access-fnkqw") pod "kube-proxy-hxhc6" (UID: "3d1d722a-546d-4c81-b469-ac9b97c7e8a3") : configmap "kube-root-ca.crt" not found Sep 12 17:35:06.847767 kubelet[2724]: I0912 17:35:06.847724 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsd66\" (UniqueName: \"kubernetes.io/projected/9547f5bf-8b65-4341-b87a-d5da57fc8edb-kube-api-access-lsd66\") pod \"tigera-operator-58fc44c59b-6jl2d\" (UID: \"9547f5bf-8b65-4341-b87a-d5da57fc8edb\") " pod="tigera-operator/tigera-operator-58fc44c59b-6jl2d" Sep 12 17:35:06.847920 kubelet[2724]: I0912 17:35:06.847824 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9547f5bf-8b65-4341-b87a-d5da57fc8edb-var-lib-calico\") pod \"tigera-operator-58fc44c59b-6jl2d\" (UID: \"9547f5bf-8b65-4341-b87a-d5da57fc8edb\") " pod="tigera-operator/tigera-operator-58fc44c59b-6jl2d" Sep 12 17:35:07.105424 containerd[1601]: time="2025-09-12T17:35:07.105265741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-6jl2d,Uid:9547f5bf-8b65-4341-b87a-d5da57fc8edb,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:35:07.140944 containerd[1601]: time="2025-09-12T17:35:07.140817665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:07.140944 containerd[1601]: time="2025-09-12T17:35:07.140894148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:07.140944 containerd[1601]: time="2025-09-12T17:35:07.140910770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:07.141359 containerd[1601]: time="2025-09-12T17:35:07.141294774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:07.237286 kubelet[2724]: E0912 17:35:07.237235 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:07.238638 containerd[1601]: time="2025-09-12T17:35:07.237929725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hxhc6,Uid:3d1d722a-546d-4c81-b469-ac9b97c7e8a3,Namespace:kube-system,Attempt:0,}" Sep 12 17:35:07.285742 containerd[1601]: time="2025-09-12T17:35:07.285576596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:07.285742 containerd[1601]: time="2025-09-12T17:35:07.285702414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:07.285742 containerd[1601]: time="2025-09-12T17:35:07.285731028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:07.286059 containerd[1601]: time="2025-09-12T17:35:07.285901920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:07.798272 containerd[1601]: time="2025-09-12T17:35:07.798204267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hxhc6,Uid:3d1d722a-546d-4c81-b469-ac9b97c7e8a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"776a2184422f6857dd78fdeeb9d30d09876327bb851765957a48f228fd7d997f\"" Sep 12 17:35:07.799068 kubelet[2724]: E0912 17:35:07.799042 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:07.804177 containerd[1601]: time="2025-09-12T17:35:07.803707624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-6jl2d,Uid:9547f5bf-8b65-4341-b87a-d5da57fc8edb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"580bd4011bd3fa0e1e3ad225ef0ba5e2c0c54f9b33279afb943cfc8b20da80a0\"" Sep 12 17:35:07.804691 containerd[1601]: time="2025-09-12T17:35:07.804621487Z" level=info msg="CreateContainer within sandbox \"776a2184422f6857dd78fdeeb9d30d09876327bb851765957a48f228fd7d997f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:35:07.805093 containerd[1601]: time="2025-09-12T17:35:07.805064774Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:35:07.821551 containerd[1601]: time="2025-09-12T17:35:07.821479337Z" level=info msg="CreateContainer within sandbox \"776a2184422f6857dd78fdeeb9d30d09876327bb851765957a48f228fd7d997f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f2704a0e1d076b38fe479f2648f1316874c9f0568b5e1fe5f2a590074b6bcc53\"" Sep 12 17:35:07.822154 containerd[1601]: time="2025-09-12T17:35:07.822106120Z" level=info msg="StartContainer for \"f2704a0e1d076b38fe479f2648f1316874c9f0568b5e1fe5f2a590074b6bcc53\"" Sep 12 17:35:07.892358 containerd[1601]: time="2025-09-12T17:35:07.892318624Z" level=info msg="StartContainer for \"f2704a0e1d076b38fe479f2648f1316874c9f0568b5e1fe5f2a590074b6bcc53\" returns successfully" Sep 12 17:35:08.446002 kubelet[2724]: E0912 17:35:08.445760 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:08.455570 kubelet[2724]: I0912 17:35:08.455494 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hxhc6" podStartSLOduration=2.455474523 podStartE2EDuration="2.455474523s" podCreationTimestamp="2025-09-12 17:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:08.454564166 +0000 UTC m=+8.135165821" watchObservedRunningTime="2025-09-12 17:35:08.455474523 +0000 UTC m=+8.136076158" Sep 12 17:35:08.819147 kubelet[2724]: E0912 17:35:08.819104 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:09.008394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3257136178.mount: Deactivated successfully. 
Sep 12 17:35:09.360877 containerd[1601]: time="2025-09-12T17:35:09.360820118Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:09.361612 containerd[1601]: time="2025-09-12T17:35:09.361558198Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:35:09.362741 containerd[1601]: time="2025-09-12T17:35:09.362704328Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:09.365214 containerd[1601]: time="2025-09-12T17:35:09.365176367Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:09.366269 containerd[1601]: time="2025-09-12T17:35:09.366226496Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.561122388s" Sep 12 17:35:09.366309 containerd[1601]: time="2025-09-12T17:35:09.366266431Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:35:09.368327 containerd[1601]: time="2025-09-12T17:35:09.368280637Z" level=info msg="CreateContainer within sandbox \"580bd4011bd3fa0e1e3ad225ef0ba5e2c0c54f9b33279afb943cfc8b20da80a0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:35:09.380052 containerd[1601]: time="2025-09-12T17:35:09.379994834Z" level=info msg="CreateContainer within sandbox \"580bd4011bd3fa0e1e3ad225ef0ba5e2c0c54f9b33279afb943cfc8b20da80a0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d5937e5b72816e7beb90d8ef870b0d1e9ee435971a7a694c07107351de184d0f\"" Sep 12 17:35:09.381077 containerd[1601]: time="2025-09-12T17:35:09.381045834Z" level=info msg="StartContainer for \"d5937e5b72816e7beb90d8ef870b0d1e9ee435971a7a694c07107351de184d0f\"" Sep 12 17:35:09.451066 kubelet[2724]: E0912 17:35:09.451015 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:09.896921 containerd[1601]: time="2025-09-12T17:35:09.896864211Z" level=info msg="StartContainer for \"d5937e5b72816e7beb90d8ef870b0d1e9ee435971a7a694c07107351de184d0f\" returns successfully" Sep 12 17:35:11.212923 kubelet[2724]: E0912 17:35:11.212637 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:11.593433 kubelet[2724]: I0912 17:35:11.589045 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-6jl2d" podStartSLOduration=4.026660197 podStartE2EDuration="5.58902077s" podCreationTimestamp="2025-09-12 17:35:06 +0000 UTC" firstStartedPulling="2025-09-12 17:35:07.80472392 +0000 UTC m=+7.485325555" lastFinishedPulling="2025-09-12 17:35:09.367084493 +0000 UTC m=+9.047686128" 
observedRunningTime="2025-09-12 17:35:10.461384336 +0000 UTC m=+10.141985971" watchObservedRunningTime="2025-09-12 17:35:11.58902077 +0000 UTC m=+11.269622405" Sep 12 17:35:15.264889 kubelet[2724]: E0912 17:35:15.264855 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:15.806179 sudo[1791]: pam_unix(sudo:session): session closed for user root Sep 12 17:35:15.811312 sshd[1783]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:15.819268 systemd[1]: sshd@6-10.0.0.72:22-10.0.0.1:48356.service: Deactivated successfully. Sep 12 17:35:15.823465 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:35:15.824854 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:35:15.828159 systemd-logind[1577]: Removed session 7. Sep 12 17:35:18.824054 kubelet[2724]: I0912 17:35:18.823993 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2ace3362-f07f-4d39-8055-0929124fb3ec-typha-certs\") pod \"calico-typha-74b5d6b5b4-5ncmx\" (UID: \"2ace3362-f07f-4d39-8055-0929124fb3ec\") " pod="calico-system/calico-typha-74b5d6b5b4-5ncmx" Sep 12 17:35:18.824054 kubelet[2724]: I0912 17:35:18.824045 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs7qd\" (UniqueName: \"kubernetes.io/projected/2ace3362-f07f-4d39-8055-0929124fb3ec-kube-api-access-vs7qd\") pod \"calico-typha-74b5d6b5b4-5ncmx\" (UID: \"2ace3362-f07f-4d39-8055-0929124fb3ec\") " pod="calico-system/calico-typha-74b5d6b5b4-5ncmx" Sep 12 17:35:18.824054 kubelet[2724]: I0912 17:35:18.824068 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ace3362-f07f-4d39-8055-0929124fb3ec-tigera-ca-bundle\") pod \"calico-typha-74b5d6b5b4-5ncmx\" (UID: \"2ace3362-f07f-4d39-8055-0929124fb3ec\") " pod="calico-system/calico-typha-74b5d6b5b4-5ncmx" Sep 12 17:35:19.119545 kubelet[2724]: E0912 17:35:19.119138 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:19.119875 containerd[1601]: time="2025-09-12T17:35:19.119660919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74b5d6b5b4-5ncmx,Uid:2ace3362-f07f-4d39-8055-0929124fb3ec,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:19.156927 containerd[1601]: time="2025-09-12T17:35:19.156801056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:19.157370 containerd[1601]: time="2025-09-12T17:35:19.156865968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:19.157370 containerd[1601]: time="2025-09-12T17:35:19.157209154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:19.157578 containerd[1601]: time="2025-09-12T17:35:19.157528524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:19.227679 kubelet[2724]: I0912 17:35:19.227562 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/39f9b13e-5e06-449c-a46c-7a261ff2233d-policysync\") pod \"calico-node-khhwh\" (UID: \"39f9b13e-5e06-449c-a46c-7a261ff2233d\") " pod="calico-system/calico-node-khhwh" Sep 12 17:35:19.227679 kubelet[2724]: I0912 17:35:19.227601 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/39f9b13e-5e06-449c-a46c-7a261ff2233d-cni-bin-dir\") pod \"calico-node-khhwh\" (UID: \"39f9b13e-5e06-449c-a46c-7a261ff2233d\") " pod="calico-system/calico-node-khhwh" Sep 12 17:35:19.227679 kubelet[2724]: I0912 17:35:19.227616 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39f9b13e-5e06-449c-a46c-7a261ff2233d-xtables-lock\") pod \"calico-node-khhwh\" (UID: \"39f9b13e-5e06-449c-a46c-7a261ff2233d\") " pod="calico-system/calico-node-khhwh" Sep 12 17:35:19.227679 kubelet[2724]: I0912 17:35:19.227634 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/39f9b13e-5e06-449c-a46c-7a261ff2233d-cni-log-dir\") pod \"calico-node-khhwh\" (UID: \"39f9b13e-5e06-449c-a46c-7a261ff2233d\") " pod="calico-system/calico-node-khhwh" Sep 12 17:35:19.227679 kubelet[2724]: I0912 17:35:19.227648 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/39f9b13e-5e06-449c-a46c-7a261ff2233d-node-certs\") pod \"calico-node-khhwh\" (UID: \"39f9b13e-5e06-449c-a46c-7a261ff2233d\") " pod="calico-system/calico-node-khhwh" Sep 12 17:35:19.228304 kubelet[2724]: I0912 17:35:19.227664 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmq4x\" (UniqueName: \"kubernetes.io/projected/39f9b13e-5e06-449c-a46c-7a261ff2233d-kube-api-access-vmq4x\") pod \"calico-node-khhwh\" (UID: \"39f9b13e-5e06-449c-a46c-7a261ff2233d\") " pod="calico-system/calico-node-khhwh" Sep 12 17:35:19.228304 kubelet[2724]: I0912 17:35:19.227679 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/39f9b13e-5e06-449c-a46c-7a261ff2233d-cni-net-dir\") pod \"calico-node-khhwh\" (UID: \"39f9b13e-5e06-449c-a46c-7a261ff2233d\") " pod="calico-system/calico-node-khhwh" Sep 12 17:35:19.228304 kubelet[2724]: I0912 17:35:19.227695 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/39f9b13e-5e06-449c-a46c-7a261ff2233d-flexvol-driver-host\") pod \"calico-node-khhwh\" (UID: \"39f9b13e-5e06-449c-a46c-7a261ff2233d\") " pod="calico-system/calico-node-khhwh" Sep 12 17:35:19.228304 kubelet[2724]: I0912 17:35:19.227710 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39f9b13e-5e06-449c-a46c-7a261ff2233d-tigera-ca-bundle\") pod \"calico-node-khhwh\" (UID: \"39f9b13e-5e06-449c-a46c-7a261ff2233d\") " pod="calico-system/calico-node-khhwh" Sep 12 17:35:19.228304 
kubelet[2724]: I0912 17:35:19.227726 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39f9b13e-5e06-449c-a46c-7a261ff2233d-var-lib-calico\") pod \"calico-node-khhwh\" (UID: \"39f9b13e-5e06-449c-a46c-7a261ff2233d\") " pod="calico-system/calico-node-khhwh" Sep 12 17:35:19.228771 kubelet[2724]: I0912 17:35:19.227741 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39f9b13e-5e06-449c-a46c-7a261ff2233d-lib-modules\") pod \"calico-node-khhwh\" (UID: \"39f9b13e-5e06-449c-a46c-7a261ff2233d\") " pod="calico-system/calico-node-khhwh" Sep 12 17:35:19.228771 kubelet[2724]: I0912 17:35:19.227757 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/39f9b13e-5e06-449c-a46c-7a261ff2233d-var-run-calico\") pod \"calico-node-khhwh\" (UID: \"39f9b13e-5e06-449c-a46c-7a261ff2233d\") " pod="calico-system/calico-node-khhwh" Sep 12 17:35:19.237568 containerd[1601]: time="2025-09-12T17:35:19.237531848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74b5d6b5b4-5ncmx,Uid:2ace3362-f07f-4d39-8055-0929124fb3ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"790536c370a3300e87be600a97ffbc3a604c0c6d68841752bda66c44b498806d\"" Sep 12 17:35:19.239520 kubelet[2724]: E0912 17:35:19.239466 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:19.241000 containerd[1601]: time="2025-09-12T17:35:19.240968320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:35:19.330321 kubelet[2724]: E0912 17:35:19.330233 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.330321 kubelet[2724]: W0912 17:35:19.330261 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.330321 kubelet[2724]: E0912 17:35:19.330311 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.332457 kubelet[2724]: E0912 17:35:19.332391 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.332457 kubelet[2724]: W0912 17:35:19.332434 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.332628 kubelet[2724]: E0912 17:35:19.332482 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.339131 kubelet[2724]: E0912 17:35:19.339107 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.339131 kubelet[2724]: W0912 17:35:19.339130 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.339219 kubelet[2724]: E0912 17:35:19.339153 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.489689 kubelet[2724]: E0912 17:35:19.489168 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pg5rh" podUID="45ffb3eb-a3d1-424f-9934-ed6fe54575da" Sep 12 17:35:19.508376 containerd[1601]: time="2025-09-12T17:35:19.508321738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-khhwh,Uid:39f9b13e-5e06-449c-a46c-7a261ff2233d,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:19.525002 kubelet[2724]: E0912 17:35:19.524959 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.525002 kubelet[2724]: W0912 17:35:19.524989 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.525002 kubelet[2724]: E0912 17:35:19.525015 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.525886 kubelet[2724]: E0912 17:35:19.525235 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.525886 kubelet[2724]: W0912 17:35:19.525249 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.525886 kubelet[2724]: E0912 17:35:19.525258 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.525886 kubelet[2724]: E0912 17:35:19.525748 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.525886 kubelet[2724]: W0912 17:35:19.525761 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.525886 kubelet[2724]: E0912 17:35:19.525774 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.526121 kubelet[2724]: E0912 17:35:19.526002 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.526121 kubelet[2724]: W0912 17:35:19.526022 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.526121 kubelet[2724]: E0912 17:35:19.526032 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.526301 kubelet[2724]: E0912 17:35:19.526254 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.526301 kubelet[2724]: W0912 17:35:19.526290 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.526301 kubelet[2724]: E0912 17:35:19.526300 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.528086 kubelet[2724]: E0912 17:35:19.526610 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.528086 kubelet[2724]: W0912 17:35:19.526624 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.528086 kubelet[2724]: E0912 17:35:19.526634 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.528086 kubelet[2724]: E0912 17:35:19.526901 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.528086 kubelet[2724]: W0912 17:35:19.526921 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.528086 kubelet[2724]: E0912 17:35:19.526931 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.528086 kubelet[2724]: E0912 17:35:19.527189 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.528086 kubelet[2724]: W0912 17:35:19.527220 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.528086 kubelet[2724]: E0912 17:35:19.527255 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.528086 kubelet[2724]: E0912 17:35:19.527584 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.528492 kubelet[2724]: W0912 17:35:19.527595 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.528492 kubelet[2724]: E0912 17:35:19.527606 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.528492 kubelet[2724]: E0912 17:35:19.527879 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.528492 kubelet[2724]: W0912 17:35:19.527890 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.528492 kubelet[2724]: E0912 17:35:19.527902 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.528492 kubelet[2724]: E0912 17:35:19.528184 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.528492 kubelet[2724]: W0912 17:35:19.528196 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.528492 kubelet[2724]: E0912 17:35:19.528209 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.528966 kubelet[2724]: E0912 17:35:19.528950 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.528966 kubelet[2724]: W0912 17:35:19.528963 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.529039 kubelet[2724]: E0912 17:35:19.528975 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.529450 kubelet[2724]: E0912 17:35:19.529393 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.529450 kubelet[2724]: W0912 17:35:19.529420 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.529450 kubelet[2724]: E0912 17:35:19.529432 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.529683 kubelet[2724]: E0912 17:35:19.529660 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.529683 kubelet[2724]: W0912 17:35:19.529673 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.529683 kubelet[2724]: E0912 17:35:19.529683 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.530020 kubelet[2724]: E0912 17:35:19.529915 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.530020 kubelet[2724]: W0912 17:35:19.529932 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.530020 kubelet[2724]: E0912 17:35:19.529948 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.530250 kubelet[2724]: E0912 17:35:19.530232 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.530250 kubelet[2724]: W0912 17:35:19.530245 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.530349 kubelet[2724]: E0912 17:35:19.530257 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.530646 kubelet[2724]: E0912 17:35:19.530626 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.530646 kubelet[2724]: W0912 17:35:19.530641 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.530751 kubelet[2724]: E0912 17:35:19.530652 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.530900 kubelet[2724]: E0912 17:35:19.530885 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.530900 kubelet[2724]: W0912 17:35:19.530897 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.530994 kubelet[2724]: E0912 17:35:19.530907 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.531139 kubelet[2724]: E0912 17:35:19.531125 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.531139 kubelet[2724]: W0912 17:35:19.531137 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.531222 kubelet[2724]: E0912 17:35:19.531148 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.531688 kubelet[2724]: E0912 17:35:19.531575 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.531688 kubelet[2724]: W0912 17:35:19.531593 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.531688 kubelet[2724]: E0912 17:35:19.531606 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.532188 kubelet[2724]: E0912 17:35:19.532004 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.532188 kubelet[2724]: W0912 17:35:19.532025 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.532188 kubelet[2724]: E0912 17:35:19.532038 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.532188 kubelet[2724]: I0912 17:35:19.532063 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45ffb3eb-a3d1-424f-9934-ed6fe54575da-kubelet-dir\") pod \"csi-node-driver-pg5rh\" (UID: \"45ffb3eb-a3d1-424f-9934-ed6fe54575da\") " pod="calico-system/csi-node-driver-pg5rh" Sep 12 17:35:19.532541 kubelet[2724]: E0912 17:35:19.532367 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.532541 kubelet[2724]: W0912 17:35:19.532380 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.532541 kubelet[2724]: E0912 17:35:19.532391 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.532541 kubelet[2724]: I0912 17:35:19.532511 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/45ffb3eb-a3d1-424f-9934-ed6fe54575da-varrun\") pod \"csi-node-driver-pg5rh\" (UID: \"45ffb3eb-a3d1-424f-9934-ed6fe54575da\") " pod="calico-system/csi-node-driver-pg5rh" Sep 12 17:35:19.532827 kubelet[2724]: E0912 17:35:19.532778 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.532827 kubelet[2724]: W0912 17:35:19.532797 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.532827 kubelet[2724]: E0912 17:35:19.532808 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.532827 kubelet[2724]: I0912 17:35:19.532823 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv5p4\" (UniqueName: \"kubernetes.io/projected/45ffb3eb-a3d1-424f-9934-ed6fe54575da-kube-api-access-hv5p4\") pod \"csi-node-driver-pg5rh\" (UID: \"45ffb3eb-a3d1-424f-9934-ed6fe54575da\") " pod="calico-system/csi-node-driver-pg5rh" Sep 12 17:35:19.533199 kubelet[2724]: E0912 17:35:19.533026 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.533199 kubelet[2724]: W0912 17:35:19.533035 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.533199 kubelet[2724]: E0912 17:35:19.533051 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.533199 kubelet[2724]: I0912 17:35:19.533066 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/45ffb3eb-a3d1-424f-9934-ed6fe54575da-registration-dir\") pod \"csi-node-driver-pg5rh\" (UID: \"45ffb3eb-a3d1-424f-9934-ed6fe54575da\") " pod="calico-system/csi-node-driver-pg5rh" Sep 12 17:35:19.533779 kubelet[2724]: E0912 17:35:19.533667 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.533779 kubelet[2724]: W0912 17:35:19.533689 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.533779 kubelet[2724]: E0912 17:35:19.533725 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.534270 kubelet[2724]: E0912 17:35:19.534168 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.534270 kubelet[2724]: W0912 17:35:19.534183 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.534476 kubelet[2724]: E0912 17:35:19.534393 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.534778 kubelet[2724]: E0912 17:35:19.534736 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.534778 kubelet[2724]: W0912 17:35:19.534750 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.535094 kubelet[2724]: E0912 17:35:19.535077 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.535186 kubelet[2724]: W0912 17:35:19.535171 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.535441 kubelet[2724]: E0912 17:35:19.535332 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.535557 kubelet[2724]: E0912 17:35:19.535538 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.535768 kubelet[2724]: E0912 17:35:19.535731 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.535817 kubelet[2724]: W0912 17:35:19.535768 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.535975 kubelet[2724]: E0912 17:35:19.535900 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.535975 kubelet[2724]: I0912 17:35:19.535945 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/45ffb3eb-a3d1-424f-9934-ed6fe54575da-socket-dir\") pod \"csi-node-driver-pg5rh\" (UID: \"45ffb3eb-a3d1-424f-9934-ed6fe54575da\") " pod="calico-system/csi-node-driver-pg5rh" Sep 12 17:35:19.536113 kubelet[2724]: E0912 17:35:19.536094 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.536113 kubelet[2724]: W0912 17:35:19.536108 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.536245 kubelet[2724]: E0912 17:35:19.536222 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.536358 kubelet[2724]: E0912 17:35:19.536342 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.536358 kubelet[2724]: W0912 17:35:19.536354 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.536468 kubelet[2724]: E0912 17:35:19.536365 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.536701 kubelet[2724]: E0912 17:35:19.536683 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.536701 kubelet[2724]: W0912 17:35:19.536696 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.536807 kubelet[2724]: E0912 17:35:19.536711 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.538777 kubelet[2724]: E0912 17:35:19.538757 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.538777 kubelet[2724]: W0912 17:35:19.538772 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.538883 kubelet[2724]: E0912 17:35:19.538784 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.539168 kubelet[2724]: E0912 17:35:19.539150 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.539168 kubelet[2724]: W0912 17:35:19.539165 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.539265 kubelet[2724]: E0912 17:35:19.539176 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.539473 kubelet[2724]: E0912 17:35:19.539443 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.539473 kubelet[2724]: W0912 17:35:19.539459 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.539473 kubelet[2724]: E0912 17:35:19.539472 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.542851 containerd[1601]: time="2025-09-12T17:35:19.542740909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:19.542851 containerd[1601]: time="2025-09-12T17:35:19.542811561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:19.542851 containerd[1601]: time="2025-09-12T17:35:19.542822882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:19.542991 containerd[1601]: time="2025-09-12T17:35:19.542930024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:19.586322 containerd[1601]: time="2025-09-12T17:35:19.586197252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-khhwh,Uid:39f9b13e-5e06-449c-a46c-7a261ff2233d,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e27fb40583a7c7fbf1640e7718d73e9e66f9ad66150ac69294c3660d4ba8aa2\"" Sep 12 17:35:19.641164 kubelet[2724]: E0912 17:35:19.641129 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.641164 kubelet[2724]: W0912 17:35:19.641153 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.641384 kubelet[2724]: E0912 17:35:19.641177 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.641571 kubelet[2724]: E0912 17:35:19.641541 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.641571 kubelet[2724]: W0912 17:35:19.641555 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.641571 kubelet[2724]: E0912 17:35:19.641572 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.641926 kubelet[2724]: E0912 17:35:19.641898 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.641926 kubelet[2724]: W0912 17:35:19.641922 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.642027 kubelet[2724]: E0912 17:35:19.641952 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.642219 kubelet[2724]: E0912 17:35:19.642191 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.642219 kubelet[2724]: W0912 17:35:19.642205 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.642219 kubelet[2724]: E0912 17:35:19.642222 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.642588 kubelet[2724]: E0912 17:35:19.642554 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.642588 kubelet[2724]: W0912 17:35:19.642581 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.642649 kubelet[2724]: E0912 17:35:19.642613 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.642863 kubelet[2724]: E0912 17:35:19.642837 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.642863 kubelet[2724]: W0912 17:35:19.642855 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.642915 kubelet[2724]: E0912 17:35:19.642876 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.643112 kubelet[2724]: E0912 17:35:19.643095 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.643139 kubelet[2724]: W0912 17:35:19.643110 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.643139 kubelet[2724]: E0912 17:35:19.643129 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.643462 kubelet[2724]: E0912 17:35:19.643444 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.643462 kubelet[2724]: W0912 17:35:19.643460 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.643525 kubelet[2724]: E0912 17:35:19.643494 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.643694 kubelet[2724]: E0912 17:35:19.643677 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.643694 kubelet[2724]: W0912 17:35:19.643693 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.643750 kubelet[2724]: E0912 17:35:19.643723 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.643946 kubelet[2724]: E0912 17:35:19.643917 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.643946 kubelet[2724]: W0912 17:35:19.643934 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.644118 kubelet[2724]: E0912 17:35:19.644055 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.644231 kubelet[2724]: E0912 17:35:19.644211 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.644277 kubelet[2724]: W0912 17:35:19.644237 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.644323 kubelet[2724]: E0912 17:35:19.644276 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.644611 kubelet[2724]: E0912 17:35:19.644580 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.644611 kubelet[2724]: W0912 17:35:19.644597 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.644683 kubelet[2724]: E0912 17:35:19.644620 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.644934 kubelet[2724]: E0912 17:35:19.644915 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.644934 kubelet[2724]: W0912 17:35:19.644931 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.644998 kubelet[2724]: E0912 17:35:19.644953 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.645248 kubelet[2724]: E0912 17:35:19.645222 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.645248 kubelet[2724]: W0912 17:35:19.645239 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.645311 kubelet[2724]: E0912 17:35:19.645257 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.645592 kubelet[2724]: E0912 17:35:19.645572 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.645592 kubelet[2724]: W0912 17:35:19.645588 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.645662 kubelet[2724]: E0912 17:35:19.645617 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.645824 kubelet[2724]: E0912 17:35:19.645807 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.645824 kubelet[2724]: W0912 17:35:19.645822 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.645898 kubelet[2724]: E0912 17:35:19.645847 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.646041 kubelet[2724]: E0912 17:35:19.646024 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.646041 kubelet[2724]: W0912 17:35:19.646039 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.646116 kubelet[2724]: E0912 17:35:19.646063 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.646300 kubelet[2724]: E0912 17:35:19.646273 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.646334 kubelet[2724]: W0912 17:35:19.646300 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.646334 kubelet[2724]: E0912 17:35:19.646328 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.646570 kubelet[2724]: E0912 17:35:19.646552 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.646570 kubelet[2724]: W0912 17:35:19.646568 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.646634 kubelet[2724]: E0912 17:35:19.646587 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.646906 kubelet[2724]: E0912 17:35:19.646885 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.646906 kubelet[2724]: W0912 17:35:19.646899 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.646996 kubelet[2724]: E0912 17:35:19.646915 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.647121 kubelet[2724]: E0912 17:35:19.647106 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.647121 kubelet[2724]: W0912 17:35:19.647116 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.647189 kubelet[2724]: E0912 17:35:19.647130 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:19.647418 kubelet[2724]: E0912 17:35:19.647388 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.647418 kubelet[2724]: W0912 17:35:19.647417 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.647489 kubelet[2724]: E0912 17:35:19.647435 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.647723 kubelet[2724]: E0912 17:35:19.647706 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.647723 kubelet[2724]: W0912 17:35:19.647718 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.647784 kubelet[2724]: E0912 17:35:19.647732 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.648013 kubelet[2724]: E0912 17:35:19.647997 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.648013 kubelet[2724]: W0912 17:35:19.648009 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.648065 kubelet[2724]: E0912 17:35:19.648023 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.648266 kubelet[2724]: E0912 17:35:19.648249 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.648266 kubelet[2724]: W0912 17:35:19.648261 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.648339 kubelet[2724]: E0912 17:35:19.648270 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:19.694956 kubelet[2724]: E0912 17:35:19.694917 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:19.694956 kubelet[2724]: W0912 17:35:19.694947 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:19.695139 kubelet[2724]: E0912 17:35:19.694978 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:21.421614 kubelet[2724]: E0912 17:35:21.420530 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pg5rh" podUID="45ffb3eb-a3d1-424f-9934-ed6fe54575da" Sep 12 17:35:21.736592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947386362.mount: Deactivated successfully. Sep 12 17:35:22.493235 containerd[1601]: time="2025-09-12T17:35:22.493160509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:22.494540 containerd[1601]: time="2025-09-12T17:35:22.494264965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:35:22.496100 containerd[1601]: time="2025-09-12T17:35:22.496049749Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:22.498890 containerd[1601]: time="2025-09-12T17:35:22.498850043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:22.499520 containerd[1601]: time="2025-09-12T17:35:22.499470779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.258450352s" Sep 12 17:35:22.499520 containerd[1601]: time="2025-09-12T17:35:22.499519220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:35:22.501082 containerd[1601]: time="2025-09-12T17:35:22.500503781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:35:22.510210 containerd[1601]: time="2025-09-12T17:35:22.510160802Z" level=info msg="CreateContainer within sandbox \"790536c370a3300e87be600a97ffbc3a604c0c6d68841752bda66c44b498806d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:35:22.532730 containerd[1601]: time="2025-09-12T17:35:22.532686159Z" level=info msg="CreateContainer within sandbox \"790536c370a3300e87be600a97ffbc3a604c0c6d68841752bda66c44b498806d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"95f0a10fae559fd225cb0ee691697630fa79cf1eb3a14330b32bfb07ee8aac7e\"" Sep 12 17:35:22.533352 containerd[1601]: time="2025-09-12T17:35:22.533299692Z" level=info msg="StartContainer for \"95f0a10fae559fd225cb0ee691697630fa79cf1eb3a14330b32bfb07ee8aac7e\"" Sep 12 17:35:22.609314 containerd[1601]: time="2025-09-12T17:35:22.609262677Z" level=info msg="StartContainer for \"95f0a10fae559fd225cb0ee691697630fa79cf1eb3a14330b32bfb07ee8aac7e\" returns successfully" Sep 12 17:35:23.421392 kubelet[2724]: E0912 17:35:23.421312 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pg5rh" podUID="45ffb3eb-a3d1-424f-9934-ed6fe54575da" Sep 12 17:35:23.499165 kubelet[2724]: E0912 17:35:23.499122 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:23.517249 kubelet[2724]: I0912 17:35:23.517156 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74b5d6b5b4-5ncmx" podStartSLOduration=2.257399275 podStartE2EDuration="5.517136775s" podCreationTimestamp="2025-09-12 17:35:18 +0000 UTC" firstStartedPulling="2025-09-12 17:35:19.240588476 +0000 UTC m=+18.921190111" lastFinishedPulling="2025-09-12 17:35:22.500325956 +0000 UTC m=+22.180927611" observedRunningTime="2025-09-12 17:35:23.516890903 +0000 UTC m=+23.197492538" watchObservedRunningTime="2025-09-12 17:35:23.517136775 +0000 UTC m=+23.197738410" Sep 12 17:35:23.561107 kubelet[2724]: E0912 17:35:23.561068 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.561107 kubelet[2724]: W0912 17:35:23.561095 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.561107 kubelet[2724]: E0912 17:35:23.561121 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.561440 kubelet[2724]: E0912 17:35:23.561422 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.561440 kubelet[2724]: W0912 17:35:23.561437 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.561542 kubelet[2724]: E0912 17:35:23.561449 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.561704 kubelet[2724]: E0912 17:35:23.561678 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.561704 kubelet[2724]: W0912 17:35:23.561691 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.561704 kubelet[2724]: E0912 17:35:23.561701 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.561910 kubelet[2724]: E0912 17:35:23.561896 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.561910 kubelet[2724]: W0912 17:35:23.561907 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.561959 kubelet[2724]: E0912 17:35:23.561916 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.562120 kubelet[2724]: E0912 17:35:23.562106 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.562120 kubelet[2724]: W0912 17:35:23.562116 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.562260 kubelet[2724]: E0912 17:35:23.562125 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.562344 kubelet[2724]: E0912 17:35:23.562326 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.562344 kubelet[2724]: W0912 17:35:23.562338 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.562444 kubelet[2724]: E0912 17:35:23.562348 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.562619 kubelet[2724]: E0912 17:35:23.562600 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.562619 kubelet[2724]: W0912 17:35:23.562612 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.562677 kubelet[2724]: E0912 17:35:23.562621 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.562850 kubelet[2724]: E0912 17:35:23.562832 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.562850 kubelet[2724]: W0912 17:35:23.562845 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.562899 kubelet[2724]: E0912 17:35:23.562853 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.563070 kubelet[2724]: E0912 17:35:23.563053 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.563070 kubelet[2724]: W0912 17:35:23.563065 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.563125 kubelet[2724]: E0912 17:35:23.563072 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.563295 kubelet[2724]: E0912 17:35:23.563271 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.563295 kubelet[2724]: W0912 17:35:23.563286 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.563391 kubelet[2724]: E0912 17:35:23.563299 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.563617 kubelet[2724]: E0912 17:35:23.563597 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.563617 kubelet[2724]: W0912 17:35:23.563609 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.563617 kubelet[2724]: E0912 17:35:23.563619 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.563823 kubelet[2724]: E0912 17:35:23.563807 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.563823 kubelet[2724]: W0912 17:35:23.563818 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.563872 kubelet[2724]: E0912 17:35:23.563825 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.564043 kubelet[2724]: E0912 17:35:23.564026 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.564043 kubelet[2724]: W0912 17:35:23.564037 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.564097 kubelet[2724]: E0912 17:35:23.564045 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.564287 kubelet[2724]: E0912 17:35:23.564263 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.564287 kubelet[2724]: W0912 17:35:23.564277 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.564287 kubelet[2724]: E0912 17:35:23.564288 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.564553 kubelet[2724]: E0912 17:35:23.564538 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.564553 kubelet[2724]: W0912 17:35:23.564550 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.564616 kubelet[2724]: E0912 17:35:23.564559 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.570040 kubelet[2724]: E0912 17:35:23.570009 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.570040 kubelet[2724]: W0912 17:35:23.570032 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.570130 kubelet[2724]: E0912 17:35:23.570056 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.570448 kubelet[2724]: E0912 17:35:23.570428 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.570448 kubelet[2724]: W0912 17:35:23.570446 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.570544 kubelet[2724]: E0912 17:35:23.570465 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.570788 kubelet[2724]: E0912 17:35:23.570757 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.570788 kubelet[2724]: W0912 17:35:23.570779 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.570850 kubelet[2724]: E0912 17:35:23.570802 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.571047 kubelet[2724]: E0912 17:35:23.571023 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.571047 kubelet[2724]: W0912 17:35:23.571035 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.571110 kubelet[2724]: E0912 17:35:23.571050 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.571264 kubelet[2724]: E0912 17:35:23.571241 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.571264 kubelet[2724]: W0912 17:35:23.571253 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.571327 kubelet[2724]: E0912 17:35:23.571267 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.571552 kubelet[2724]: E0912 17:35:23.571530 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.571552 kubelet[2724]: W0912 17:35:23.571547 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.571615 kubelet[2724]: E0912 17:35:23.571564 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.571797 kubelet[2724]: E0912 17:35:23.571785 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.571797 kubelet[2724]: W0912 17:35:23.571795 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.571853 kubelet[2724]: E0912 17:35:23.571829 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.572004 kubelet[2724]: E0912 17:35:23.571991 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.572034 kubelet[2724]: W0912 17:35:23.572002 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.572066 kubelet[2724]: E0912 17:35:23.572032 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.572293 kubelet[2724]: E0912 17:35:23.572273 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.572293 kubelet[2724]: W0912 17:35:23.572286 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.572353 kubelet[2724]: E0912 17:35:23.572303 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.572739 kubelet[2724]: E0912 17:35:23.572719 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.572739 kubelet[2724]: W0912 17:35:23.572734 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.572819 kubelet[2724]: E0912 17:35:23.572751 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.573012 kubelet[2724]: E0912 17:35:23.572995 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.573012 kubelet[2724]: W0912 17:35:23.573008 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.573072 kubelet[2724]: E0912 17:35:23.573026 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.573336 kubelet[2724]: E0912 17:35:23.573310 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.573336 kubelet[2724]: W0912 17:35:23.573327 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.573402 kubelet[2724]: E0912 17:35:23.573366 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.573631 kubelet[2724]: E0912 17:35:23.573612 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.573631 kubelet[2724]: W0912 17:35:23.573626 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.573701 kubelet[2724]: E0912 17:35:23.573660 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.573878 kubelet[2724]: E0912 17:35:23.573852 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.573878 kubelet[2724]: W0912 17:35:23.573867 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.573942 kubelet[2724]: E0912 17:35:23.573893 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.574103 kubelet[2724]: E0912 17:35:23.574086 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.574103 kubelet[2724]: W0912 17:35:23.574099 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.574161 kubelet[2724]: E0912 17:35:23.574116 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.574367 kubelet[2724]: E0912 17:35:23.574350 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.574367 kubelet[2724]: W0912 17:35:23.574365 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.574451 kubelet[2724]: E0912 17:35:23.574378 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.574673 kubelet[2724]: E0912 17:35:23.574659 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.574703 kubelet[2724]: W0912 17:35:23.574672 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.574703 kubelet[2724]: E0912 17:35:23.574685 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.575171 kubelet[2724]: E0912 17:35:23.575150 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.575171 kubelet[2724]: W0912 17:35:23.575164 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.575245 kubelet[2724]: E0912 17:35:23.575175 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:24.169305 containerd[1601]: time="2025-09-12T17:35:24.169249110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:24.170061 containerd[1601]: time="2025-09-12T17:35:24.169995582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:35:24.171361 containerd[1601]: time="2025-09-12T17:35:24.171321394Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:24.173809 containerd[1601]: time="2025-09-12T17:35:24.173780875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:24.174504 containerd[1601]: time="2025-09-12T17:35:24.174454230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.673914071s" Sep 12 17:35:24.174504 containerd[1601]: time="2025-09-12T17:35:24.174488725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:35:24.176703 containerd[1601]: time="2025-09-12T17:35:24.176672499Z" level=info msg="CreateContainer within sandbox \"1e27fb40583a7c7fbf1640e7718d73e9e66f9ad66150ac69294c3660d4ba8aa2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:35:24.197184 containerd[1601]: time="2025-09-12T17:35:24.197125771Z" level=info msg="CreateContainer within sandbox \"1e27fb40583a7c7fbf1640e7718d73e9e66f9ad66150ac69294c3660d4ba8aa2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e493435e60ef692555f514d540ac6aefb9c95f58daa876b49d291ccb1f7aff65\"" Sep 12 17:35:24.197863 containerd[1601]: time="2025-09-12T17:35:24.197821499Z" level=info msg="StartContainer for \"e493435e60ef692555f514d540ac6aefb9c95f58daa876b49d291ccb1f7aff65\"" Sep 12 17:35:24.759378 containerd[1601]: time="2025-09-12T17:35:24.759214237Z" level=info msg="StartContainer for \"e493435e60ef692555f514d540ac6aefb9c95f58daa876b49d291ccb1f7aff65\" returns successfully" Sep 12 17:35:24.762707 kubelet[2724]: I0912 17:35:24.762664 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:35:24.763231 kubelet[2724]: E0912 17:35:24.763159 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:24.783185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e493435e60ef692555f514d540ac6aefb9c95f58daa876b49d291ccb1f7aff65-rootfs.mount: Deactivated successfully. 
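The repeated kubelet errors above come from FlexVolume plugin probing: kubelet has found the plugin directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the driver executable uds inside it does not exist yet, so the init call produces no output and unmarshalling the empty string as JSON fails. The flexvol-driver container that has just started here (pulled from ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3) is the component that normally installs that driver, which Calico's CSI node driver uses for its per-pod socket directory. As a rough sketch of the call convention kubelet is exercising, assuming only the documented FlexVolume contract (the file name and Go types below are illustrative, not Calico's actual implementation): a FlexVolume driver is any executable that answers init with a small JSON status document on stdout, for example:

// flexvol_init_sketch.go
// Minimal, illustrative FlexVolume driver entry point. Kubelet invokes the
// driver as "<driver> init" (and later mount/unmount) and parses the JSON it
// prints on stdout; an empty reply is what triggers the
// "Failed to unmarshal output for command: init" errors seen above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// response carries the minimal fields kubelet expects from a FlexVolume call.
type response struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		reply(response{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// "attach": false tells kubelet this driver has no attach/detach phase.
		reply(response{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		// Other calls (mount, unmount, ...) are declined explicitly.
		reply(response{Status: "Not supported"})
	}
}

func reply(r response) {
	out, _ := json.Marshal(r)
	fmt.Println(string(out))
}

Built and placed at .../volume/exec/nodeagent~uds/uds, a binary following this contract would make the probe return {"status":"Success",...} instead of empty output, which is why the probe errors stop once the flexvol-driver init container has run.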
Sep 12 17:35:24.790867 containerd[1601]: time="2025-09-12T17:35:24.789047285Z" level=info msg="shim disconnected" id=e493435e60ef692555f514d540ac6aefb9c95f58daa876b49d291ccb1f7aff65 namespace=k8s.io Sep 12 17:35:24.790867 containerd[1601]: time="2025-09-12T17:35:24.790831197Z" level=warning msg="cleaning up after shim disconnected" id=e493435e60ef692555f514d540ac6aefb9c95f58daa876b49d291ccb1f7aff65 namespace=k8s.io Sep 12 17:35:24.790867 containerd[1601]: time="2025-09-12T17:35:24.790847217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:35:25.420830 kubelet[2724]: E0912 17:35:25.420749 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pg5rh" podUID="45ffb3eb-a3d1-424f-9934-ed6fe54575da" Sep 12 17:35:25.767518 containerd[1601]: time="2025-09-12T17:35:25.766561675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:35:27.420760 kubelet[2724]: E0912 17:35:27.420670 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pg5rh" podUID="45ffb3eb-a3d1-424f-9934-ed6fe54575da" Sep 12 17:35:29.420704 kubelet[2724]: E0912 17:35:29.420594 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pg5rh" podUID="45ffb3eb-a3d1-424f-9934-ed6fe54575da" Sep 12 17:35:30.239638 containerd[1601]: time="2025-09-12T17:35:30.239576118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:30.240763 containerd[1601]: time="2025-09-12T17:35:30.240715216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:35:30.242426 containerd[1601]: time="2025-09-12T17:35:30.242350949Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:30.245198 containerd[1601]: time="2025-09-12T17:35:30.245166777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:30.246237 containerd[1601]: time="2025-09-12T17:35:30.246077958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.479472041s" Sep 12 17:35:30.246237 containerd[1601]: time="2025-09-12T17:35:30.246122302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:35:30.248294 containerd[1601]: time="2025-09-12T17:35:30.248258433Z" 
level=info msg="CreateContainer within sandbox \"1e27fb40583a7c7fbf1640e7718d73e9e66f9ad66150ac69294c3660d4ba8aa2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:35:30.266515 containerd[1601]: time="2025-09-12T17:35:30.266476306Z" level=info msg="CreateContainer within sandbox \"1e27fb40583a7c7fbf1640e7718d73e9e66f9ad66150ac69294c3660d4ba8aa2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ba61362a3669caa61e5c9aaa041df956a41f5c30d7fd993207bf7f129f02c560\"" Sep 12 17:35:30.267248 containerd[1601]: time="2025-09-12T17:35:30.267176852Z" level=info msg="StartContainer for \"ba61362a3669caa61e5c9aaa041df956a41f5c30d7fd993207bf7f129f02c560\"" Sep 12 17:35:30.341772 containerd[1601]: time="2025-09-12T17:35:30.341713701Z" level=info msg="StartContainer for \"ba61362a3669caa61e5c9aaa041df956a41f5c30d7fd993207bf7f129f02c560\" returns successfully" Sep 12 17:35:31.421011 kubelet[2724]: E0912 17:35:31.420940 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pg5rh" podUID="45ffb3eb-a3d1-424f-9934-ed6fe54575da" Sep 12 17:35:33.006945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba61362a3669caa61e5c9aaa041df956a41f5c30d7fd993207bf7f129f02c560-rootfs.mount: Deactivated successfully. Sep 12 17:35:33.013808 containerd[1601]: time="2025-09-12T17:35:33.013739015Z" level=info msg="shim disconnected" id=ba61362a3669caa61e5c9aaa041df956a41f5c30d7fd993207bf7f129f02c560 namespace=k8s.io Sep 12 17:35:33.013808 containerd[1601]: time="2025-09-12T17:35:33.013803527Z" level=warning msg="cleaning up after shim disconnected" id=ba61362a3669caa61e5c9aaa041df956a41f5c30d7fd993207bf7f129f02c560 namespace=k8s.io Sep 12 17:35:33.014316 containerd[1601]: time="2025-09-12T17:35:33.013814027Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:35:33.062723 kubelet[2724]: I0912 17:35:33.062601 2724 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:35:33.240568 kubelet[2724]: I0912 17:35:33.240459 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69s6h\" (UniqueName: \"kubernetes.io/projected/e7b2ca51-b8fe-48dd-93cf-b98746e57dea-kube-api-access-69s6h\") pod \"calico-apiserver-689c48fdcf-qf7qr\" (UID: \"e7b2ca51-b8fe-48dd-93cf-b98746e57dea\") " pod="calico-apiserver/calico-apiserver-689c48fdcf-qf7qr" Sep 12 17:35:33.240568 kubelet[2724]: I0912 17:35:33.240531 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22dbceb8-3888-4c97-ae99-1a48c9c8116f-tigera-ca-bundle\") pod \"calico-kube-controllers-65ddc98f95-bgcp8\" (UID: \"22dbceb8-3888-4c97-ae99-1a48c9c8116f\") " pod="calico-system/calico-kube-controllers-65ddc98f95-bgcp8" Sep 12 17:35:33.240568 kubelet[2724]: I0912 17:35:33.240553 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5w98\" (UniqueName: \"kubernetes.io/projected/22dbceb8-3888-4c97-ae99-1a48c9c8116f-kube-api-access-z5w98\") pod \"calico-kube-controllers-65ddc98f95-bgcp8\" (UID: \"22dbceb8-3888-4c97-ae99-1a48c9c8116f\") " pod="calico-system/calico-kube-controllers-65ddc98f95-bgcp8" Sep 12 17:35:33.240568 kubelet[2724]: I0912 
17:35:33.240586 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eaa01bb9-7882-4ad4-ae32-48a47624b31c-whisker-backend-key-pair\") pod \"whisker-5c45c567bb-fg9p6\" (UID: \"eaa01bb9-7882-4ad4-ae32-48a47624b31c\") " pod="calico-system/whisker-5c45c567bb-fg9p6" Sep 12 17:35:33.240568 kubelet[2724]: I0912 17:35:33.240607 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c01b22f9-f886-420b-b430-8234a928fb35-config-volume\") pod \"coredns-7c65d6cfc9-kvzhb\" (UID: \"c01b22f9-f886-420b-b430-8234a928fb35\") " pod="kube-system/coredns-7c65d6cfc9-kvzhb" Sep 12 17:35:33.241023 kubelet[2724]: I0912 17:35:33.240630 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/470162f6-a6a8-4102-8800-2ba86b4652d8-config-volume\") pod \"coredns-7c65d6cfc9-x5bfz\" (UID: \"470162f6-a6a8-4102-8800-2ba86b4652d8\") " pod="kube-system/coredns-7c65d6cfc9-x5bfz" Sep 12 17:35:33.241023 kubelet[2724]: I0912 17:35:33.240653 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8xcn\" (UniqueName: \"kubernetes.io/projected/6d962edc-2c93-4f13-a932-db0a9095910d-kube-api-access-k8xcn\") pod \"goldmane-7988f88666-qcj2j\" (UID: \"6d962edc-2c93-4f13-a932-db0a9095910d\") " pod="calico-system/goldmane-7988f88666-qcj2j" Sep 12 17:35:33.241023 kubelet[2724]: I0912 17:35:33.240675 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcb2s\" (UniqueName: \"kubernetes.io/projected/c01b22f9-f886-420b-b430-8234a928fb35-kube-api-access-rcb2s\") pod \"coredns-7c65d6cfc9-kvzhb\" (UID: \"c01b22f9-f886-420b-b430-8234a928fb35\") " pod="kube-system/coredns-7c65d6cfc9-kvzhb" Sep 12 17:35:33.241023 kubelet[2724]: I0912 17:35:33.240772 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lxtp\" (UniqueName: \"kubernetes.io/projected/470162f6-a6a8-4102-8800-2ba86b4652d8-kube-api-access-5lxtp\") pod \"coredns-7c65d6cfc9-x5bfz\" (UID: \"470162f6-a6a8-4102-8800-2ba86b4652d8\") " pod="kube-system/coredns-7c65d6cfc9-x5bfz" Sep 12 17:35:33.241023 kubelet[2724]: I0912 17:35:33.240818 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d962edc-2c93-4f13-a932-db0a9095910d-goldmane-ca-bundle\") pod \"goldmane-7988f88666-qcj2j\" (UID: \"6d962edc-2c93-4f13-a932-db0a9095910d\") " pod="calico-system/goldmane-7988f88666-qcj2j" Sep 12 17:35:33.241212 kubelet[2724]: I0912 17:35:33.240841 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncnmq\" (UniqueName: \"kubernetes.io/projected/30910c1c-2d0b-4258-860d-9f0d09a1d5af-kube-api-access-ncnmq\") pod \"calico-apiserver-689c48fdcf-95pkx\" (UID: \"30910c1c-2d0b-4258-860d-9f0d09a1d5af\") " pod="calico-apiserver/calico-apiserver-689c48fdcf-95pkx" Sep 12 17:35:33.241212 kubelet[2724]: I0912 17:35:33.240865 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6d962edc-2c93-4f13-a932-db0a9095910d-goldmane-key-pair\") pod 
\"goldmane-7988f88666-qcj2j\" (UID: \"6d962edc-2c93-4f13-a932-db0a9095910d\") " pod="calico-system/goldmane-7988f88666-qcj2j" Sep 12 17:35:33.241212 kubelet[2724]: I0912 17:35:33.240883 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgm89\" (UniqueName: \"kubernetes.io/projected/eaa01bb9-7882-4ad4-ae32-48a47624b31c-kube-api-access-sgm89\") pod \"whisker-5c45c567bb-fg9p6\" (UID: \"eaa01bb9-7882-4ad4-ae32-48a47624b31c\") " pod="calico-system/whisker-5c45c567bb-fg9p6" Sep 12 17:35:33.241212 kubelet[2724]: I0912 17:35:33.240900 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e7b2ca51-b8fe-48dd-93cf-b98746e57dea-calico-apiserver-certs\") pod \"calico-apiserver-689c48fdcf-qf7qr\" (UID: \"e7b2ca51-b8fe-48dd-93cf-b98746e57dea\") " pod="calico-apiserver/calico-apiserver-689c48fdcf-qf7qr" Sep 12 17:35:33.241212 kubelet[2724]: I0912 17:35:33.240924 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d962edc-2c93-4f13-a932-db0a9095910d-config\") pod \"goldmane-7988f88666-qcj2j\" (UID: \"6d962edc-2c93-4f13-a932-db0a9095910d\") " pod="calico-system/goldmane-7988f88666-qcj2j" Sep 12 17:35:33.241384 kubelet[2724]: I0912 17:35:33.240989 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaa01bb9-7882-4ad4-ae32-48a47624b31c-whisker-ca-bundle\") pod \"whisker-5c45c567bb-fg9p6\" (UID: \"eaa01bb9-7882-4ad4-ae32-48a47624b31c\") " pod="calico-system/whisker-5c45c567bb-fg9p6" Sep 12 17:35:33.241384 kubelet[2724]: I0912 17:35:33.241031 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/30910c1c-2d0b-4258-860d-9f0d09a1d5af-calico-apiserver-certs\") pod \"calico-apiserver-689c48fdcf-95pkx\" (UID: \"30910c1c-2d0b-4258-860d-9f0d09a1d5af\") " pod="calico-apiserver/calico-apiserver-689c48fdcf-95pkx" Sep 12 17:35:33.403392 containerd[1601]: time="2025-09-12T17:35:33.403339928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-qcj2j,Uid:6d962edc-2c93-4f13-a932-db0a9095910d,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:33.405992 kubelet[2724]: E0912 17:35:33.405721 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:33.406475 containerd[1601]: time="2025-09-12T17:35:33.406445309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kvzhb,Uid:c01b22f9-f886-420b-b430-8234a928fb35,Namespace:kube-system,Attempt:0,}" Sep 12 17:35:33.420378 containerd[1601]: time="2025-09-12T17:35:33.420327674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-689c48fdcf-qf7qr,Uid:e7b2ca51-b8fe-48dd-93cf-b98746e57dea,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:35:33.423077 kubelet[2724]: E0912 17:35:33.423042 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:33.424052 containerd[1601]: time="2025-09-12T17:35:33.423728569Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5c45c567bb-fg9p6,Uid:eaa01bb9-7882-4ad4-ae32-48a47624b31c,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:33.424052 containerd[1601]: time="2025-09-12T17:35:33.423929957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x5bfz,Uid:470162f6-a6a8-4102-8800-2ba86b4652d8,Namespace:kube-system,Attempt:0,}" Sep 12 17:35:33.424154 containerd[1601]: time="2025-09-12T17:35:33.424074960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pg5rh,Uid:45ffb3eb-a3d1-424f-9934-ed6fe54575da,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:33.424775 containerd[1601]: time="2025-09-12T17:35:33.424743204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65ddc98f95-bgcp8,Uid:22dbceb8-3888-4c97-ae99-1a48c9c8116f,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:33.427658 containerd[1601]: time="2025-09-12T17:35:33.427629814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-689c48fdcf-95pkx,Uid:30910c1c-2d0b-4258-860d-9f0d09a1d5af,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:35:33.882610 containerd[1601]: time="2025-09-12T17:35:33.882574726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:35:33.973306 containerd[1601]: time="2025-09-12T17:35:33.973246500Z" level=error msg="Failed to destroy network for sandbox \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:33.973991 containerd[1601]: time="2025-09-12T17:35:33.973958658Z" level=error msg="encountered an error cleaning up failed sandbox \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:33.974130 containerd[1601]: time="2025-09-12T17:35:33.974093220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pg5rh,Uid:45ffb3eb-a3d1-424f-9934-ed6fe54575da,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:33.989335 containerd[1601]: time="2025-09-12T17:35:33.989259997Z" level=error msg="Failed to destroy network for sandbox \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:33.991171 containerd[1601]: time="2025-09-12T17:35:33.991135959Z" level=error msg="encountered an error cleaning up failed sandbox \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:33.991228 containerd[1601]: time="2025-09-12T17:35:33.991202534Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-689c48fdcf-qf7qr,Uid:e7b2ca51-b8fe-48dd-93cf-b98746e57dea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:33.996455 containerd[1601]: time="2025-09-12T17:35:33.996245864Z" level=error msg="Failed to destroy network for sandbox \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:33.997016 containerd[1601]: time="2025-09-12T17:35:33.996968691Z" level=error msg="encountered an error cleaning up failed sandbox \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:33.997150 containerd[1601]: time="2025-09-12T17:35:33.997117029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-689c48fdcf-95pkx,Uid:30910c1c-2d0b-4258-860d-9f0d09a1d5af,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:33.997461 kubelet[2724]: E0912 17:35:33.997398 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:33.997550 kubelet[2724]: E0912 17:35:33.997505 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pg5rh" Sep 12 17:35:33.997550 kubelet[2724]: E0912 17:35:33.997530 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pg5rh" Sep 12 17:35:33.997725 kubelet[2724]: E0912 17:35:33.997580 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pg5rh_calico-system(45ffb3eb-a3d1-424f-9934-ed6fe54575da)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"csi-node-driver-pg5rh_calico-system(45ffb3eb-a3d1-424f-9934-ed6fe54575da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pg5rh" podUID="45ffb3eb-a3d1-424f-9934-ed6fe54575da" Sep 12 17:35:33.998700 containerd[1601]: time="2025-09-12T17:35:33.998625632Z" level=error msg="Failed to destroy network for sandbox \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:33.998795 kubelet[2724]: E0912 17:35:33.997312 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:33.998868 kubelet[2724]: E0912 17:35:33.998805 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-689c48fdcf-qf7qr" Sep 12 17:35:33.998868 kubelet[2724]: E0912 17:35:33.998822 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-689c48fdcf-qf7qr" Sep 12 17:35:33.998868 kubelet[2724]: E0912 17:35:33.998847 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-689c48fdcf-qf7qr_calico-apiserver(e7b2ca51-b8fe-48dd-93cf-b98746e57dea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-689c48fdcf-qf7qr_calico-apiserver(e7b2ca51-b8fe-48dd-93cf-b98746e57dea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-689c48fdcf-qf7qr" podUID="e7b2ca51-b8fe-48dd-93cf-b98746e57dea" Sep 12 17:35:33.999076 kubelet[2724]: E0912 17:35:33.998916 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:33.999076 kubelet[2724]: E0912 17:35:33.998951 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-689c48fdcf-95pkx" Sep 12 17:35:33.999076 kubelet[2724]: E0912 17:35:33.998966 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-689c48fdcf-95pkx" Sep 12 17:35:33.999165 kubelet[2724]: E0912 17:35:33.998987 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-689c48fdcf-95pkx_calico-apiserver(30910c1c-2d0b-4258-860d-9f0d09a1d5af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-689c48fdcf-95pkx_calico-apiserver(30910c1c-2d0b-4258-860d-9f0d09a1d5af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-689c48fdcf-95pkx" podUID="30910c1c-2d0b-4258-860d-9f0d09a1d5af" Sep 12 17:35:34.000000 containerd[1601]: time="2025-09-12T17:35:33.999967161Z" level=error msg="encountered an error cleaning up failed sandbox \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.000203 containerd[1601]: time="2025-09-12T17:35:34.000168799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kvzhb,Uid:c01b22f9-f886-420b-b430-8234a928fb35,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.000515 kubelet[2724]: E0912 17:35:34.000490 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.000578 kubelet[2724]: E0912 17:35:34.000522 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kvzhb" Sep 12 17:35:34.000578 kubelet[2724]: E0912 17:35:34.000539 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kvzhb" Sep 12 17:35:34.000646 kubelet[2724]: E0912 17:35:34.000613 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kvzhb_kube-system(c01b22f9-f886-420b-b430-8234a928fb35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kvzhb_kube-system(c01b22f9-f886-420b-b430-8234a928fb35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kvzhb" podUID="c01b22f9-f886-420b-b430-8234a928fb35" Sep 12 17:35:34.022529 containerd[1601]: time="2025-09-12T17:35:34.022380561Z" level=error msg="Failed to destroy network for sandbox \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.027011 containerd[1601]: time="2025-09-12T17:35:34.025584056Z" level=error msg="Failed to destroy network for sandbox \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.027011 containerd[1601]: time="2025-09-12T17:35:34.025767400Z" level=error msg="encountered an error cleaning up failed sandbox \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.027011 containerd[1601]: time="2025-09-12T17:35:34.025815640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c45c567bb-fg9p6,Uid:eaa01bb9-7882-4ad4-ae32-48a47624b31c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.027011 containerd[1601]: time="2025-09-12T17:35:34.026747029Z" level=error msg="encountered an error cleaning up failed sandbox 
\"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.026508 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b-shm.mount: Deactivated successfully. Sep 12 17:35:34.031342 containerd[1601]: time="2025-09-12T17:35:34.030693539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65ddc98f95-bgcp8,Uid:22dbceb8-3888-4c97-ae99-1a48c9c8116f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.031342 containerd[1601]: time="2025-09-12T17:35:34.030875210Z" level=error msg="Failed to destroy network for sandbox \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.031342 containerd[1601]: time="2025-09-12T17:35:34.030901298Z" level=error msg="Failed to destroy network for sandbox \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.031342 containerd[1601]: time="2025-09-12T17:35:34.031249252Z" level=error msg="encountered an error cleaning up failed sandbox \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.031342 containerd[1601]: time="2025-09-12T17:35:34.031289237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x5bfz,Uid:470162f6-a6a8-4102-8800-2ba86b4652d8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.031007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd-shm.mount: Deactivated successfully. 
Sep 12 17:35:34.031619 kubelet[2724]: E0912 17:35:34.031510 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.031619 kubelet[2724]: E0912 17:35:34.031564 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-x5bfz" Sep 12 17:35:34.031619 kubelet[2724]: E0912 17:35:34.031584 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-x5bfz" Sep 12 17:35:34.031718 containerd[1601]: time="2025-09-12T17:35:34.031343278Z" level=error msg="encountered an error cleaning up failed sandbox \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.031718 containerd[1601]: time="2025-09-12T17:35:34.031396588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-qcj2j,Uid:6d962edc-2c93-4f13-a932-db0a9095910d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.031802 kubelet[2724]: E0912 17:35:34.031622 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-x5bfz_kube-system(470162f6-a6a8-4102-8800-2ba86b4652d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-x5bfz_kube-system(470162f6-a6a8-4102-8800-2ba86b4652d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-x5bfz" podUID="470162f6-a6a8-4102-8800-2ba86b4652d8" Sep 12 17:35:34.035424 kubelet[2724]: E0912 17:35:34.033535 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.035424 kubelet[2724]: E0912 17:35:34.033570 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-qcj2j" Sep 12 17:35:34.035424 kubelet[2724]: E0912 17:35:34.033606 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-qcj2j" Sep 12 17:35:34.035553 kubelet[2724]: E0912 17:35:34.033637 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-qcj2j_calico-system(6d962edc-2c93-4f13-a932-db0a9095910d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-qcj2j_calico-system(6d962edc-2c93-4f13-a932-db0a9095910d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-qcj2j" podUID="6d962edc-2c93-4f13-a932-db0a9095910d" Sep 12 17:35:34.035553 kubelet[2724]: E0912 17:35:34.033683 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.035553 kubelet[2724]: E0912 17:35:34.033702 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c45c567bb-fg9p6" Sep 12 17:35:34.035656 kubelet[2724]: E0912 17:35:34.033715 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c45c567bb-fg9p6" Sep 12 17:35:34.035656 kubelet[2724]: E0912 17:35:34.033735 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-5c45c567bb-fg9p6_calico-system(eaa01bb9-7882-4ad4-ae32-48a47624b31c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c45c567bb-fg9p6_calico-system(eaa01bb9-7882-4ad4-ae32-48a47624b31c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c45c567bb-fg9p6" podUID="eaa01bb9-7882-4ad4-ae32-48a47624b31c" Sep 12 17:35:34.035656 kubelet[2724]: E0912 17:35:34.034876 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.035772 kubelet[2724]: E0912 17:35:34.034973 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65ddc98f95-bgcp8" Sep 12 17:35:34.035772 kubelet[2724]: E0912 17:35:34.034993 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65ddc98f95-bgcp8" Sep 12 17:35:34.035772 kubelet[2724]: E0912 17:35:34.035033 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65ddc98f95-bgcp8_calico-system(22dbceb8-3888-4c97-ae99-1a48c9c8116f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65ddc98f95-bgcp8_calico-system(22dbceb8-3888-4c97-ae99-1a48c9c8116f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65ddc98f95-bgcp8" podUID="22dbceb8-3888-4c97-ae99-1a48c9c8116f" Sep 12 17:35:34.036214 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad-shm.mount: Deactivated successfully. Sep 12 17:35:34.036495 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614-shm.mount: Deactivated successfully. 
Sep 12 17:35:34.882346 kubelet[2724]: I0912 17:35:34.882299 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:35:34.883590 kubelet[2724]: I0912 17:35:34.883111 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:35:34.885039 kubelet[2724]: I0912 17:35:34.884977 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:35:34.889445 kubelet[2724]: I0912 17:35:34.889254 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:35:34.910204 containerd[1601]: time="2025-09-12T17:35:34.910122655Z" level=info msg="StopPodSandbox for \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\"" Sep 12 17:35:34.910980 kubelet[2724]: I0912 17:35:34.910582 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:35:34.911110 containerd[1601]: time="2025-09-12T17:35:34.911038335Z" level=info msg="StopPodSandbox for \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\"" Sep 12 17:35:34.911276 containerd[1601]: time="2025-09-12T17:35:34.911242909Z" level=info msg="StopPodSandbox for \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\"" Sep 12 17:35:34.912433 containerd[1601]: time="2025-09-12T17:35:34.912178405Z" level=info msg="Ensure that sandbox 3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614 in task-service has been cleanup successfully" Sep 12 17:35:34.912433 containerd[1601]: time="2025-09-12T17:35:34.912216427Z" level=info msg="StopPodSandbox for \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\"" Sep 12 17:35:34.912615 containerd[1601]: time="2025-09-12T17:35:34.912182233Z" level=info msg="Ensure that sandbox 06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0 in task-service has been cleanup successfully" Sep 12 17:35:34.912823 containerd[1601]: time="2025-09-12T17:35:34.912801525Z" level=info msg="Ensure that sandbox e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b in task-service has been cleanup successfully" Sep 12 17:35:34.913140 containerd[1601]: time="2025-09-12T17:35:34.913106307Z" level=info msg="StopPodSandbox for \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\"" Sep 12 17:35:34.914037 containerd[1601]: time="2025-09-12T17:35:34.913267400Z" level=info msg="Ensure that sandbox 7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad in task-service has been cleanup successfully" Sep 12 17:35:34.919211 containerd[1601]: time="2025-09-12T17:35:34.912184447Z" level=info msg="Ensure that sandbox f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176 in task-service has been cleanup successfully" Sep 12 17:35:34.919673 kubelet[2724]: I0912 17:35:34.919636 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:35:34.922768 containerd[1601]: time="2025-09-12T17:35:34.922650706Z" level=info msg="StopPodSandbox for \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\"" Sep 12 17:35:34.923073 
containerd[1601]: time="2025-09-12T17:35:34.923046931Z" level=info msg="Ensure that sandbox 1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151 in task-service has been cleanup successfully" Sep 12 17:35:34.925967 kubelet[2724]: I0912 17:35:34.925933 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:35:34.928091 containerd[1601]: time="2025-09-12T17:35:34.928057117Z" level=info msg="StopPodSandbox for \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\"" Sep 12 17:35:34.930180 containerd[1601]: time="2025-09-12T17:35:34.929956443Z" level=info msg="Ensure that sandbox 6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd in task-service has been cleanup successfully" Sep 12 17:35:34.932715 kubelet[2724]: I0912 17:35:34.932687 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:35:34.933542 containerd[1601]: time="2025-09-12T17:35:34.933399517Z" level=info msg="StopPodSandbox for \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\"" Sep 12 17:35:34.933993 containerd[1601]: time="2025-09-12T17:35:34.933763070Z" level=info msg="Ensure that sandbox 88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801 in task-service has been cleanup successfully" Sep 12 17:35:34.969651 containerd[1601]: time="2025-09-12T17:35:34.969591637Z" level=error msg="StopPodSandbox for \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\" failed" error="failed to destroy network for sandbox \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.970141 kubelet[2724]: E0912 17:35:34.970097 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:35:34.970297 kubelet[2724]: E0912 17:35:34.970213 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614"} Sep 12 17:35:34.970468 kubelet[2724]: E0912 17:35:34.970443 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6d962edc-2c93-4f13-a932-db0a9095910d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:34.970557 kubelet[2724]: E0912 17:35:34.970476 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6d962edc-2c93-4f13-a932-db0a9095910d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-qcj2j" podUID="6d962edc-2c93-4f13-a932-db0a9095910d" Sep 12 17:35:34.974034 containerd[1601]: time="2025-09-12T17:35:34.973938839Z" level=error msg="StopPodSandbox for \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\" failed" error="failed to destroy network for sandbox \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.974506 kubelet[2724]: E0912 17:35:34.974386 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:35:34.974668 kubelet[2724]: E0912 17:35:34.974518 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad"} Sep 12 17:35:34.974718 kubelet[2724]: E0912 17:35:34.974597 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"470162f6-a6a8-4102-8800-2ba86b4652d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:34.974795 kubelet[2724]: E0912 17:35:34.974728 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"470162f6-a6a8-4102-8800-2ba86b4652d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-x5bfz" podUID="470162f6-a6a8-4102-8800-2ba86b4652d8" Sep 12 17:35:34.984649 containerd[1601]: time="2025-09-12T17:35:34.984595347Z" level=error msg="StopPodSandbox for \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\" failed" error="failed to destroy network for sandbox \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.985417 kubelet[2724]: E0912 17:35:34.985120 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:35:34.985417 kubelet[2724]: E0912 17:35:34.985189 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151"} Sep 12 17:35:34.985417 kubelet[2724]: E0912 17:35:34.985231 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c01b22f9-f886-420b-b430-8234a928fb35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:34.985417 kubelet[2724]: E0912 17:35:34.985268 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c01b22f9-f886-420b-b430-8234a928fb35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kvzhb" podUID="c01b22f9-f886-420b-b430-8234a928fb35" Sep 12 17:35:34.990600 containerd[1601]: time="2025-09-12T17:35:34.990543504Z" level=error msg="StopPodSandbox for \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\" failed" error="failed to destroy network for sandbox \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.990928 kubelet[2724]: E0912 17:35:34.990878 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:35:34.991026 kubelet[2724]: E0912 17:35:34.990952 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd"} Sep 12 17:35:34.991026 kubelet[2724]: E0912 17:35:34.990997 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22dbceb8-3888-4c97-ae99-1a48c9c8116f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:34.991166 kubelet[2724]: E0912 17:35:34.991025 
2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22dbceb8-3888-4c97-ae99-1a48c9c8116f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65ddc98f95-bgcp8" podUID="22dbceb8-3888-4c97-ae99-1a48c9c8116f" Sep 12 17:35:34.992801 containerd[1601]: time="2025-09-12T17:35:34.992719890Z" level=error msg="StopPodSandbox for \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\" failed" error="failed to destroy network for sandbox \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.992947 kubelet[2724]: E0912 17:35:34.992882 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:35:34.992947 kubelet[2724]: E0912 17:35:34.992937 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176"} Sep 12 17:35:34.993025 kubelet[2724]: E0912 17:35:34.992965 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"45ffb3eb-a3d1-424f-9934-ed6fe54575da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:34.993025 kubelet[2724]: E0912 17:35:34.992987 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"45ffb3eb-a3d1-424f-9934-ed6fe54575da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pg5rh" podUID="45ffb3eb-a3d1-424f-9934-ed6fe54575da" Sep 12 17:35:34.995385 containerd[1601]: time="2025-09-12T17:35:34.995341602Z" level=error msg="StopPodSandbox for \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\" failed" error="failed to destroy network for sandbox \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 
17:35:34.996308 kubelet[2724]: E0912 17:35:34.996275 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:35:34.996379 kubelet[2724]: E0912 17:35:34.996313 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801"} Sep 12 17:35:34.996379 kubelet[2724]: E0912 17:35:34.996340 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30910c1c-2d0b-4258-860d-9f0d09a1d5af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:34.996379 kubelet[2724]: E0912 17:35:34.996361 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30910c1c-2d0b-4258-860d-9f0d09a1d5af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-689c48fdcf-95pkx" podUID="30910c1c-2d0b-4258-860d-9f0d09a1d5af" Sep 12 17:35:34.997268 containerd[1601]: time="2025-09-12T17:35:34.996843612Z" level=error msg="StopPodSandbox for \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\" failed" error="failed to destroy network for sandbox \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:34.997345 kubelet[2724]: E0912 17:35:34.996997 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:35:34.997345 kubelet[2724]: E0912 17:35:34.997020 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0"} Sep 12 17:35:34.997345 kubelet[2724]: E0912 17:35:34.997040 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7b2ca51-b8fe-48dd-93cf-b98746e57dea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:34.997345 kubelet[2724]: E0912 17:35:34.997057 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7b2ca51-b8fe-48dd-93cf-b98746e57dea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-689c48fdcf-qf7qr" podUID="e7b2ca51-b8fe-48dd-93cf-b98746e57dea" Sep 12 17:35:35.001596 containerd[1601]: time="2025-09-12T17:35:35.001545219Z" level=error msg="StopPodSandbox for \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\" failed" error="failed to destroy network for sandbox \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:35.001722 kubelet[2724]: E0912 17:35:35.001686 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:35:35.001767 kubelet[2724]: E0912 17:35:35.001726 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b"} Sep 12 17:35:35.001767 kubelet[2724]: E0912 17:35:35.001751 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eaa01bb9-7882-4ad4-ae32-48a47624b31c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:35.001944 kubelet[2724]: E0912 17:35:35.001768 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eaa01bb9-7882-4ad4-ae32-48a47624b31c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c45c567bb-fg9p6" podUID="eaa01bb9-7882-4ad4-ae32-48a47624b31c" Sep 12 17:35:41.532668 systemd[1]: Started sshd@7-10.0.0.72:22-10.0.0.1:41762.service - OpenSSH per-connection server daemon (10.0.0.1:41762). 
Sep 12 17:35:41.575598 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 41762 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:35:41.576122 sshd[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:41.582977 systemd-logind[1577]: New session 8 of user core. Sep 12 17:35:41.586925 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:35:42.106105 sshd[3948]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:42.110850 systemd[1]: sshd@7-10.0.0.72:22-10.0.0.1:41762.service: Deactivated successfully. Sep 12 17:35:42.116982 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:35:42.118175 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:35:42.119494 systemd-logind[1577]: Removed session 8. Sep 12 17:35:42.500007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782837507.mount: Deactivated successfully. Sep 12 17:35:43.633935 containerd[1601]: time="2025-09-12T17:35:43.633852277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:43.634827 containerd[1601]: time="2025-09-12T17:35:43.634759169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:35:43.639989 containerd[1601]: time="2025-09-12T17:35:43.639774502Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:43.643553 containerd[1601]: time="2025-09-12T17:35:43.643498641Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:43.644445 containerd[1601]: time="2025-09-12T17:35:43.644296377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.761512048s" Sep 12 17:35:43.644445 containerd[1601]: time="2025-09-12T17:35:43.644363634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:35:43.658291 containerd[1601]: time="2025-09-12T17:35:43.658247920Z" level=info msg="CreateContainer within sandbox \"1e27fb40583a7c7fbf1640e7718d73e9e66f9ad66150ac69294c3660d4ba8aa2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:35:43.703303 containerd[1601]: time="2025-09-12T17:35:43.703226653Z" level=info msg="CreateContainer within sandbox \"1e27fb40583a7c7fbf1640e7718d73e9e66f9ad66150ac69294c3660d4ba8aa2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"41e3ddfcb62ff204ce99cd11f775fa60d8ffb703695878162988b715509cc64d\"" Sep 12 17:35:43.704129 containerd[1601]: time="2025-09-12T17:35:43.704086897Z" level=info msg="StartContainer for \"41e3ddfcb62ff204ce99cd11f775fa60d8ffb703695878162988b715509cc64d\"" Sep 12 17:35:44.259829 containerd[1601]: time="2025-09-12T17:35:44.259726299Z" level=info msg="StartContainer for 
\"41e3ddfcb62ff204ce99cd11f775fa60d8ffb703695878162988b715509cc64d\" returns successfully" Sep 12 17:35:44.264043 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:35:44.264236 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 17:35:44.351279 kubelet[2724]: I0912 17:35:44.349785 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-khhwh" podStartSLOduration=1.291857708 podStartE2EDuration="25.349735204s" podCreationTimestamp="2025-09-12 17:35:19 +0000 UTC" firstStartedPulling="2025-09-12 17:35:19.587347304 +0000 UTC m=+19.267948939" lastFinishedPulling="2025-09-12 17:35:43.64522481 +0000 UTC m=+43.325826435" observedRunningTime="2025-09-12 17:35:44.349566286 +0000 UTC m=+44.030167921" watchObservedRunningTime="2025-09-12 17:35:44.349735204 +0000 UTC m=+44.030336839" Sep 12 17:35:44.487542 containerd[1601]: time="2025-09-12T17:35:44.485813943Z" level=info msg="StopPodSandbox for \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\"" Sep 12 17:35:44.683720 containerd[1601]: 2025-09-12 17:35:44.599 [INFO][4052] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:35:44.683720 containerd[1601]: 2025-09-12 17:35:44.600 [INFO][4052] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" iface="eth0" netns="/var/run/netns/cni-1c102e42-689e-b1fa-c81d-0818bc9640fe" Sep 12 17:35:44.683720 containerd[1601]: 2025-09-12 17:35:44.601 [INFO][4052] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" iface="eth0" netns="/var/run/netns/cni-1c102e42-689e-b1fa-c81d-0818bc9640fe" Sep 12 17:35:44.683720 containerd[1601]: 2025-09-12 17:35:44.601 [INFO][4052] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" iface="eth0" netns="/var/run/netns/cni-1c102e42-689e-b1fa-c81d-0818bc9640fe" Sep 12 17:35:44.683720 containerd[1601]: 2025-09-12 17:35:44.601 [INFO][4052] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:35:44.683720 containerd[1601]: 2025-09-12 17:35:44.601 [INFO][4052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:35:44.683720 containerd[1601]: 2025-09-12 17:35:44.667 [INFO][4065] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" HandleID="k8s-pod-network.e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Workload="localhost-k8s-whisker--5c45c567bb--fg9p6-eth0" Sep 12 17:35:44.683720 containerd[1601]: 2025-09-12 17:35:44.667 [INFO][4065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:44.683720 containerd[1601]: 2025-09-12 17:35:44.667 [INFO][4065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:44.683720 containerd[1601]: 2025-09-12 17:35:44.674 [WARNING][4065] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" HandleID="k8s-pod-network.e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Workload="localhost-k8s-whisker--5c45c567bb--fg9p6-eth0" Sep 12 17:35:44.683720 containerd[1601]: 2025-09-12 17:35:44.674 [INFO][4065] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" HandleID="k8s-pod-network.e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Workload="localhost-k8s-whisker--5c45c567bb--fg9p6-eth0" Sep 12 17:35:44.683720 containerd[1601]: 2025-09-12 17:35:44.676 [INFO][4065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:44.683720 containerd[1601]: 2025-09-12 17:35:44.679 [INFO][4052] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:35:44.684538 containerd[1601]: time="2025-09-12T17:35:44.683941848Z" level=info msg="TearDown network for sandbox \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\" successfully" Sep 12 17:35:44.684538 containerd[1601]: time="2025-09-12T17:35:44.683986322Z" level=info msg="StopPodSandbox for \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\" returns successfully" Sep 12 17:35:44.687030 systemd[1]: run-netns-cni\x2d1c102e42\x2d689e\x2db1fa\x2dc81d\x2d0818bc9640fe.mount: Deactivated successfully. Sep 12 17:35:44.818783 kubelet[2724]: I0912 17:35:44.818716 2724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgm89\" (UniqueName: \"kubernetes.io/projected/eaa01bb9-7882-4ad4-ae32-48a47624b31c-kube-api-access-sgm89\") pod \"eaa01bb9-7882-4ad4-ae32-48a47624b31c\" (UID: \"eaa01bb9-7882-4ad4-ae32-48a47624b31c\") " Sep 12 17:35:44.818783 kubelet[2724]: I0912 17:35:44.818771 2724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eaa01bb9-7882-4ad4-ae32-48a47624b31c-whisker-backend-key-pair\") pod \"eaa01bb9-7882-4ad4-ae32-48a47624b31c\" (UID: \"eaa01bb9-7882-4ad4-ae32-48a47624b31c\") " Sep 12 17:35:44.818783 kubelet[2724]: I0912 17:35:44.818794 2724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaa01bb9-7882-4ad4-ae32-48a47624b31c-whisker-ca-bundle\") pod \"eaa01bb9-7882-4ad4-ae32-48a47624b31c\" (UID: \"eaa01bb9-7882-4ad4-ae32-48a47624b31c\") " Sep 12 17:35:44.819323 kubelet[2724]: I0912 17:35:44.819286 2724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa01bb9-7882-4ad4-ae32-48a47624b31c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "eaa01bb9-7882-4ad4-ae32-48a47624b31c" (UID: "eaa01bb9-7882-4ad4-ae32-48a47624b31c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:35:44.824331 kubelet[2724]: I0912 17:35:44.824282 2724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa01bb9-7882-4ad4-ae32-48a47624b31c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "eaa01bb9-7882-4ad4-ae32-48a47624b31c" (UID: "eaa01bb9-7882-4ad4-ae32-48a47624b31c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:35:44.825604 kubelet[2724]: I0912 17:35:44.825568 2724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa01bb9-7882-4ad4-ae32-48a47624b31c-kube-api-access-sgm89" (OuterVolumeSpecName: "kube-api-access-sgm89") pod "eaa01bb9-7882-4ad4-ae32-48a47624b31c" (UID: "eaa01bb9-7882-4ad4-ae32-48a47624b31c"). InnerVolumeSpecName "kube-api-access-sgm89". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:35:44.827122 systemd[1]: var-lib-kubelet-pods-eaa01bb9\x2d7882\x2d4ad4\x2dae32\x2d48a47624b31c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:35:44.827351 systemd[1]: var-lib-kubelet-pods-eaa01bb9\x2d7882\x2d4ad4\x2dae32\x2d48a47624b31c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsgm89.mount: Deactivated successfully. Sep 12 17:35:44.919187 kubelet[2724]: I0912 17:35:44.919106 2724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgm89\" (UniqueName: \"kubernetes.io/projected/eaa01bb9-7882-4ad4-ae32-48a47624b31c-kube-api-access-sgm89\") on node \"localhost\" DevicePath \"\"" Sep 12 17:35:44.919187 kubelet[2724]: I0912 17:35:44.919163 2724 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eaa01bb9-7882-4ad4-ae32-48a47624b31c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 17:35:44.919187 kubelet[2724]: I0912 17:35:44.919175 2724 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaa01bb9-7882-4ad4-ae32-48a47624b31c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 17:35:45.522851 kubelet[2724]: I0912 17:35:45.522771 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/58991b52-842b-4123-af40-737fffcc7b19-whisker-backend-key-pair\") pod \"whisker-596795f569-zlqt9\" (UID: \"58991b52-842b-4123-af40-737fffcc7b19\") " pod="calico-system/whisker-596795f569-zlqt9" Sep 12 17:35:45.522851 kubelet[2724]: I0912 17:35:45.522845 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrzw\" (UniqueName: \"kubernetes.io/projected/58991b52-842b-4123-af40-737fffcc7b19-kube-api-access-7xrzw\") pod \"whisker-596795f569-zlqt9\" (UID: \"58991b52-842b-4123-af40-737fffcc7b19\") " pod="calico-system/whisker-596795f569-zlqt9" Sep 12 17:35:45.523400 kubelet[2724]: I0912 17:35:45.522876 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58991b52-842b-4123-af40-737fffcc7b19-whisker-ca-bundle\") pod \"whisker-596795f569-zlqt9\" (UID: \"58991b52-842b-4123-af40-737fffcc7b19\") " pod="calico-system/whisker-596795f569-zlqt9" Sep 12 17:35:45.672623 containerd[1601]: time="2025-09-12T17:35:45.672556932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596795f569-zlqt9,Uid:58991b52-842b-4123-af40-737fffcc7b19,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:46.141543 systemd-networkd[1269]: calib55e790a3f0: Link UP Sep 12 17:35:46.141965 systemd-networkd[1269]: calib55e790a3f0: Gained carrier Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:45.972 [INFO][4211] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 
17:35:46.307560 containerd[1601]: 2025-09-12 17:35:45.988 [INFO][4211] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--596795f569--zlqt9-eth0 whisker-596795f569- calico-system 58991b52-842b-4123-af40-737fffcc7b19 947 0 2025-09-12 17:35:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:596795f569 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-596795f569-zlqt9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib55e790a3f0 [] [] }} ContainerID="beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" Namespace="calico-system" Pod="whisker-596795f569-zlqt9" WorkloadEndpoint="localhost-k8s-whisker--596795f569--zlqt9-" Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:45.988 [INFO][4211] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" Namespace="calico-system" Pod="whisker-596795f569-zlqt9" WorkloadEndpoint="localhost-k8s-whisker--596795f569--zlqt9-eth0" Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.029 [INFO][4224] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" HandleID="k8s-pod-network.beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" Workload="localhost-k8s-whisker--596795f569--zlqt9-eth0" Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.030 [INFO][4224] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" HandleID="k8s-pod-network.beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" Workload="localhost-k8s-whisker--596795f569--zlqt9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-596795f569-zlqt9", "timestamp":"2025-09-12 17:35:46.029884044 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.030 [INFO][4224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.030 [INFO][4224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
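A note on the kubelet entries at 17:35:43.64 and 17:35:44.35 above: containerd's pull summary and the pod_startup_latency_tracker line for calico-node-khhwh are internally consistent. podStartE2EDuration minus the image-pull window equals podStartSLOduration, which is consistent with the SLO figure excluding time spent pulling the node image. A small check using only values copied from the log (the MB/s figure assumes the bytes-read and duration fields describe the same transfer):

package main

import "fmt"

func main() {
	// m=+ offsets and durations copied from the 17:35:44.349 kubelet entry, in seconds.
	const (
		firstStartedPulling = 19.267948939
		lastFinishedPulling = 43.325826435
		podStartE2E         = 25.349735204 // observedRunningTime minus podCreationTimestamp
	)
	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window:      %.9fs\n", pullWindow)             // 24.057877496s
	fmt.Printf("E2E minus pull:   %.9fs\n", podStartE2E-pullWindow) // matches podStartSLOduration=1.291857708

	// containerd reported 157078339 bytes read while pulling calico/node:v3.30.3 in 9.761512048s.
	fmt.Printf("approx pull rate: %.1f MB/s\n", 157078339/9.761512048/1e6)
}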
Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.030 [INFO][4224] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.037 [INFO][4224] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" host="localhost" Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.043 [INFO][4224] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.047 [INFO][4224] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.049 [INFO][4224] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.052 [INFO][4224] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.052 [INFO][4224] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" host="localhost" Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.053 [INFO][4224] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88 Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.060 [INFO][4224] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" host="localhost" Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.130 [INFO][4224] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" host="localhost" Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.130 [INFO][4224] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" host="localhost" Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.130 [INFO][4224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
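The ipam_plugin sequence above (acquire the host-wide IPAM lock, look up the host's block affinity, load block 192.168.88.128/26, assign one address, release the lock) is the normal Calico path for a node that already owns an address block. The sketch below compresses the assignment step, under the assumption that allocations are tracked in an in-memory set; Calico's real allocator records this state in its datastore rather than in-process:

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the node's affine block and returns the first address that is
// not yet allocated. In the trace above the block is 192.168.88.128/26 and the
// first assignment comes back as 192.168.88.129 for the whisker pod.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() { // skip the block's base address
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{}
	for i := 0; i < 3; i++ {
		if ip, ok := nextFree(block, allocated); ok {
			allocated[ip] = true
			fmt.Println(ip) // .129, .130, .131: the sequence the next sandboxes receive later in this log
		}
	}
}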
Sep 12 17:35:46.307560 containerd[1601]: 2025-09-12 17:35:46.130 [INFO][4224] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" HandleID="k8s-pod-network.beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" Workload="localhost-k8s-whisker--596795f569--zlqt9-eth0" Sep 12 17:35:46.308893 containerd[1601]: 2025-09-12 17:35:46.134 [INFO][4211] cni-plugin/k8s.go 418: Populated endpoint ContainerID="beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" Namespace="calico-system" Pod="whisker-596795f569-zlqt9" WorkloadEndpoint="localhost-k8s-whisker--596795f569--zlqt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--596795f569--zlqt9-eth0", GenerateName:"whisker-596795f569-", Namespace:"calico-system", SelfLink:"", UID:"58991b52-842b-4123-af40-737fffcc7b19", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"596795f569", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-596795f569-zlqt9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib55e790a3f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:46.308893 containerd[1601]: 2025-09-12 17:35:46.134 [INFO][4211] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" Namespace="calico-system" Pod="whisker-596795f569-zlqt9" WorkloadEndpoint="localhost-k8s-whisker--596795f569--zlqt9-eth0" Sep 12 17:35:46.308893 containerd[1601]: 2025-09-12 17:35:46.134 [INFO][4211] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib55e790a3f0 ContainerID="beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" Namespace="calico-system" Pod="whisker-596795f569-zlqt9" WorkloadEndpoint="localhost-k8s-whisker--596795f569--zlqt9-eth0" Sep 12 17:35:46.308893 containerd[1601]: 2025-09-12 17:35:46.142 [INFO][4211] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" Namespace="calico-system" Pod="whisker-596795f569-zlqt9" WorkloadEndpoint="localhost-k8s-whisker--596795f569--zlqt9-eth0" Sep 12 17:35:46.308893 containerd[1601]: 2025-09-12 17:35:46.142 [INFO][4211] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" Namespace="calico-system" Pod="whisker-596795f569-zlqt9" WorkloadEndpoint="localhost-k8s-whisker--596795f569--zlqt9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--596795f569--zlqt9-eth0", GenerateName:"whisker-596795f569-", Namespace:"calico-system", SelfLink:"", UID:"58991b52-842b-4123-af40-737fffcc7b19", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"596795f569", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88", Pod:"whisker-596795f569-zlqt9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib55e790a3f0", MAC:"4a:ad:f9:51:63:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:46.308893 containerd[1601]: 2025-09-12 17:35:46.303 [INFO][4211] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88" Namespace="calico-system" Pod="whisker-596795f569-zlqt9" WorkloadEndpoint="localhost-k8s-whisker--596795f569--zlqt9-eth0" Sep 12 17:35:46.422816 containerd[1601]: time="2025-09-12T17:35:46.421842543Z" level=info msg="StopPodSandbox for \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\"" Sep 12 17:35:46.422816 containerd[1601]: time="2025-09-12T17:35:46.422284032Z" level=info msg="StopPodSandbox for \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\"" Sep 12 17:35:46.422816 containerd[1601]: time="2025-09-12T17:35:46.421859135Z" level=info msg="StopPodSandbox for \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\"" Sep 12 17:35:46.423860 kubelet[2724]: I0912 17:35:46.423815 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa01bb9-7882-4ad4-ae32-48a47624b31c" path="/var/lib/kubelet/pods/eaa01bb9-7882-4ad4-ae32-48a47624b31c/volumes" Sep 12 17:35:46.495653 containerd[1601]: time="2025-09-12T17:35:46.495307402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:46.495653 containerd[1601]: time="2025-09-12T17:35:46.495425434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:46.495653 containerd[1601]: time="2025-09-12T17:35:46.495457704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:46.497596 containerd[1601]: time="2025-09-12T17:35:46.497141084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:46.538756 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:35:46.574180 containerd[1601]: time="2025-09-12T17:35:46.574132470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596795f569-zlqt9,Uid:58991b52-842b-4123-af40-737fffcc7b19,Namespace:calico-system,Attempt:0,} returns sandbox id \"beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88\"" Sep 12 17:35:46.576349 containerd[1601]: time="2025-09-12T17:35:46.576319273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:35:46.576753 containerd[1601]: 2025-09-12 17:35:46.493 [INFO][4275] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:35:46.576753 containerd[1601]: 2025-09-12 17:35:46.494 [INFO][4275] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" iface="eth0" netns="/var/run/netns/cni-6cccc66f-2926-db8b-a5b2-11fa82432a9e" Sep 12 17:35:46.576753 containerd[1601]: 2025-09-12 17:35:46.495 [INFO][4275] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" iface="eth0" netns="/var/run/netns/cni-6cccc66f-2926-db8b-a5b2-11fa82432a9e" Sep 12 17:35:46.576753 containerd[1601]: 2025-09-12 17:35:46.495 [INFO][4275] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" iface="eth0" netns="/var/run/netns/cni-6cccc66f-2926-db8b-a5b2-11fa82432a9e" Sep 12 17:35:46.576753 containerd[1601]: 2025-09-12 17:35:46.495 [INFO][4275] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:35:46.576753 containerd[1601]: 2025-09-12 17:35:46.495 [INFO][4275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:35:46.576753 containerd[1601]: 2025-09-12 17:35:46.551 [INFO][4324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" HandleID="k8s-pod-network.3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Workload="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:35:46.576753 containerd[1601]: 2025-09-12 17:35:46.551 [INFO][4324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:46.576753 containerd[1601]: 2025-09-12 17:35:46.551 [INFO][4324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:46.576753 containerd[1601]: 2025-09-12 17:35:46.564 [WARNING][4324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" HandleID="k8s-pod-network.3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Workload="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:35:46.576753 containerd[1601]: 2025-09-12 17:35:46.564 [INFO][4324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" HandleID="k8s-pod-network.3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Workload="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:35:46.576753 containerd[1601]: 2025-09-12 17:35:46.567 [INFO][4324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:46.576753 containerd[1601]: 2025-09-12 17:35:46.573 [INFO][4275] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:35:46.577263 containerd[1601]: time="2025-09-12T17:35:46.577172274Z" level=info msg="TearDown network for sandbox \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\" successfully" Sep 12 17:35:46.577263 containerd[1601]: time="2025-09-12T17:35:46.577189036Z" level=info msg="StopPodSandbox for \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\" returns successfully" Sep 12 17:35:46.577831 containerd[1601]: time="2025-09-12T17:35:46.577789642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-qcj2j,Uid:6d962edc-2c93-4f13-a932-db0a9095910d,Namespace:calico-system,Attempt:1,}" Sep 12 17:35:46.588234 containerd[1601]: 2025-09-12 17:35:46.492 [INFO][4277] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:35:46.588234 containerd[1601]: 2025-09-12 17:35:46.492 [INFO][4277] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" iface="eth0" netns="/var/run/netns/cni-cea76afe-63c3-d16d-e39e-8abd3d08cdd3" Sep 12 17:35:46.588234 containerd[1601]: 2025-09-12 17:35:46.493 [INFO][4277] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" iface="eth0" netns="/var/run/netns/cni-cea76afe-63c3-d16d-e39e-8abd3d08cdd3" Sep 12 17:35:46.588234 containerd[1601]: 2025-09-12 17:35:46.493 [INFO][4277] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" iface="eth0" netns="/var/run/netns/cni-cea76afe-63c3-d16d-e39e-8abd3d08cdd3" Sep 12 17:35:46.588234 containerd[1601]: 2025-09-12 17:35:46.493 [INFO][4277] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:35:46.588234 containerd[1601]: 2025-09-12 17:35:46.493 [INFO][4277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:35:46.588234 containerd[1601]: 2025-09-12 17:35:46.551 [INFO][4321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" HandleID="k8s-pod-network.7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Workload="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:35:46.588234 containerd[1601]: 2025-09-12 17:35:46.552 [INFO][4321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:46.588234 containerd[1601]: 2025-09-12 17:35:46.568 [INFO][4321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:46.588234 containerd[1601]: 2025-09-12 17:35:46.577 [WARNING][4321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" HandleID="k8s-pod-network.7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Workload="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:35:46.588234 containerd[1601]: 2025-09-12 17:35:46.577 [INFO][4321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" HandleID="k8s-pod-network.7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Workload="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:35:46.588234 containerd[1601]: 2025-09-12 17:35:46.582 [INFO][4321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:46.588234 containerd[1601]: 2025-09-12 17:35:46.585 [INFO][4277] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:35:46.588734 containerd[1601]: time="2025-09-12T17:35:46.588400454Z" level=info msg="TearDown network for sandbox \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\" successfully" Sep 12 17:35:46.588734 containerd[1601]: time="2025-09-12T17:35:46.588485504Z" level=info msg="StopPodSandbox for \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\" returns successfully" Sep 12 17:35:46.588936 kubelet[2724]: E0912 17:35:46.588906 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:46.590267 containerd[1601]: time="2025-09-12T17:35:46.589653255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x5bfz,Uid:470162f6-a6a8-4102-8800-2ba86b4652d8,Namespace:kube-system,Attempt:1,}" Sep 12 17:35:46.601069 containerd[1601]: 2025-09-12 17:35:46.483 [INFO][4276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:35:46.601069 containerd[1601]: 2025-09-12 17:35:46.484 [INFO][4276] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" iface="eth0" netns="/var/run/netns/cni-93801182-bfab-c33b-9fcc-d44c0b8985a9" Sep 12 17:35:46.601069 containerd[1601]: 2025-09-12 17:35:46.485 [INFO][4276] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" iface="eth0" netns="/var/run/netns/cni-93801182-bfab-c33b-9fcc-d44c0b8985a9" Sep 12 17:35:46.601069 containerd[1601]: 2025-09-12 17:35:46.486 [INFO][4276] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" iface="eth0" netns="/var/run/netns/cni-93801182-bfab-c33b-9fcc-d44c0b8985a9" Sep 12 17:35:46.601069 containerd[1601]: 2025-09-12 17:35:46.486 [INFO][4276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:35:46.601069 containerd[1601]: 2025-09-12 17:35:46.486 [INFO][4276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:35:46.601069 containerd[1601]: 2025-09-12 17:35:46.553 [INFO][4310] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" HandleID="k8s-pod-network.88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Workload="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:35:46.601069 containerd[1601]: 2025-09-12 17:35:46.553 [INFO][4310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:46.601069 containerd[1601]: 2025-09-12 17:35:46.582 [INFO][4310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:46.601069 containerd[1601]: 2025-09-12 17:35:46.591 [WARNING][4310] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" HandleID="k8s-pod-network.88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Workload="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:35:46.601069 containerd[1601]: 2025-09-12 17:35:46.591 [INFO][4310] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" HandleID="k8s-pod-network.88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Workload="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:35:46.601069 containerd[1601]: 2025-09-12 17:35:46.593 [INFO][4310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:46.601069 containerd[1601]: 2025-09-12 17:35:46.597 [INFO][4276] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:35:46.601659 containerd[1601]: time="2025-09-12T17:35:46.601213478Z" level=info msg="TearDown network for sandbox \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\" successfully" Sep 12 17:35:46.601659 containerd[1601]: time="2025-09-12T17:35:46.601243724Z" level=info msg="StopPodSandbox for \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\" returns successfully" Sep 12 17:35:46.602027 containerd[1601]: time="2025-09-12T17:35:46.601999854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-689c48fdcf-95pkx,Uid:30910c1c-2d0b-4258-860d-9f0d09a1d5af,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:35:46.658903 systemd[1]: run-netns-cni\x2d93801182\x2dbfab\x2dc33b\x2d9fcc\x2dd44c0b8985a9.mount: Deactivated successfully. Sep 12 17:35:46.659143 systemd[1]: run-netns-cni\x2dcea76afe\x2d63c3\x2dd16d\x2de39e\x2d8abd3d08cdd3.mount: Deactivated successfully. Sep 12 17:35:46.659322 systemd[1]: run-netns-cni\x2d6cccc66f\x2d2926\x2ddb8b\x2da5b2\x2d11fa82432a9e.mount: Deactivated successfully. Sep 12 17:35:46.761427 systemd-networkd[1269]: cali7cb0a604231: Link UP Sep 12 17:35:46.762792 systemd-networkd[1269]: cali7cb0a604231: Gained carrier Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.655 [INFO][4363] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.670 [INFO][4363] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0 coredns-7c65d6cfc9- kube-system 470162f6-a6a8-4102-8800-2ba86b4652d8 961 0 2025-09-12 17:35:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-x5bfz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7cb0a604231 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5bfz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x5bfz-" Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.671 [INFO][4363] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5bfz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.712 [INFO][4402] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" HandleID="k8s-pod-network.893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" Workload="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.712 [INFO][4402] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" HandleID="k8s-pod-network.893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" Workload="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eb10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", 
"pod":"coredns-7c65d6cfc9-x5bfz", "timestamp":"2025-09-12 17:35:46.712621168 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.712 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.713 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.713 [INFO][4402] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.721 [INFO][4402] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" host="localhost" Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.725 [INFO][4402] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.729 [INFO][4402] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.731 [INFO][4402] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.734 [INFO][4402] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.734 [INFO][4402] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" host="localhost" Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.736 [INFO][4402] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.743 [INFO][4402] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" host="localhost" Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.749 [INFO][4402] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" host="localhost" Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.749 [INFO][4402] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" host="localhost" Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.749 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:35:46.777849 containerd[1601]: 2025-09-12 17:35:46.749 [INFO][4402] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" HandleID="k8s-pod-network.893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" Workload="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:35:46.778587 containerd[1601]: 2025-09-12 17:35:46.753 [INFO][4363] cni-plugin/k8s.go 418: Populated endpoint ContainerID="893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5bfz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"470162f6-a6a8-4102-8800-2ba86b4652d8", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-x5bfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cb0a604231", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:46.778587 containerd[1601]: 2025-09-12 17:35:46.756 [INFO][4363] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5bfz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:35:46.778587 containerd[1601]: 2025-09-12 17:35:46.757 [INFO][4363] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cb0a604231 ContainerID="893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5bfz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:35:46.778587 containerd[1601]: 2025-09-12 17:35:46.761 [INFO][4363] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5bfz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:35:46.778587 
containerd[1601]: 2025-09-12 17:35:46.765 [INFO][4363] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5bfz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"470162f6-a6a8-4102-8800-2ba86b4652d8", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e", Pod:"coredns-7c65d6cfc9-x5bfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cb0a604231", MAC:"b6:bb:ca:de:76:df", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:46.778587 containerd[1601]: 2025-09-12 17:35:46.774 [INFO][4363] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5bfz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:35:46.798662 containerd[1601]: time="2025-09-12T17:35:46.798505023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:46.798662 containerd[1601]: time="2025-09-12T17:35:46.798655206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:46.798908 containerd[1601]: time="2025-09-12T17:35:46.798672879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:46.799082 containerd[1601]: time="2025-09-12T17:35:46.798926918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:46.848985 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:35:46.886183 systemd-networkd[1269]: calib1004327243: Link UP Sep 12 17:35:46.889280 systemd-networkd[1269]: calib1004327243: Gained carrier Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.653 [INFO][4374] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.677 [INFO][4374] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--qcj2j-eth0 goldmane-7988f88666- calico-system 6d962edc-2c93-4f13-a932-db0a9095910d 963 0 2025-09-12 17:35:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-qcj2j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib1004327243 [] [] }} ContainerID="e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" Namespace="calico-system" Pod="goldmane-7988f88666-qcj2j" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--qcj2j-" Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.677 [INFO][4374] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" Namespace="calico-system" Pod="goldmane-7988f88666-qcj2j" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.715 [INFO][4409] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" HandleID="k8s-pod-network.e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" Workload="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.716 [INFO][4409] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" HandleID="k8s-pod-network.e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" Workload="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000502a30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-qcj2j", "timestamp":"2025-09-12 17:35:46.715630865 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.716 [INFO][4409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.750 [INFO][4409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
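One readability note on the WorkloadEndpoint dumps above: container ports are printed as hex Port values, so the coredns endpoint's 0x35 and 0x23c1 are the same dns and metrics ports already listed by name earlier in that trace. Decoding them is simple arithmetic:

package main

import "fmt"

func main() {
	// Hex port values copied from the coredns WorkloadEndpoint dump, decoded to decimal.
	ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Printf("%-8s 0x%x = %d\n", name, p, p)
	}
	// Expected (map order may vary): dns and dns-tcp 0x35 = 53, metrics 0x23c1 = 9153.
}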
Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.750 [INFO][4409] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.829 [INFO][4409] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" host="localhost" Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.840 [INFO][4409] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.848 [INFO][4409] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.852 [INFO][4409] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.855 [INFO][4409] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.855 [INFO][4409] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" host="localhost" Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.858 [INFO][4409] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0 Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.863 [INFO][4409] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" host="localhost" Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.872 [INFO][4409] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" host="localhost" Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.873 [INFO][4409] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" host="localhost" Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.873 [INFO][4409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
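With goldmane this is the third allocation from the same affine block within the same second, so the node-local picture is compact enough to tabulate. A /26 block holds 64 addresses, leaving ample headroom before Calico would need to claim another block for this node; all figures below are taken from the traces above:

package main

import "fmt"

func main() {
	// Pod-to-address assignments observed in the IPAM traces above, all carved
	// out of the node's affine block 192.168.88.128/26.
	assigned := map[string]string{
		"whisker-596795f569-zlqt9":  "192.168.88.129",
		"coredns-7c65d6cfc9-x5bfz":  "192.168.88.130",
		"goldmane-7988f88666-qcj2j": "192.168.88.131",
	}
	blockSize := 1 << (32 - 26) // 64 addresses in a /26
	fmt.Printf("block capacity %d, assigned so far %d\n", blockSize, len(assigned))
	for pod, ip := range assigned {
		fmt.Println(ip, "->", pod)
	}
}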
Sep 12 17:35:46.903361 containerd[1601]: 2025-09-12 17:35:46.873 [INFO][4409] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" HandleID="k8s-pod-network.e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" Workload="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:35:46.904506 containerd[1601]: 2025-09-12 17:35:46.880 [INFO][4374] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" Namespace="calico-system" Pod="goldmane-7988f88666-qcj2j" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--qcj2j-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6d962edc-2c93-4f13-a932-db0a9095910d", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-qcj2j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1004327243", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:46.904506 containerd[1601]: 2025-09-12 17:35:46.881 [INFO][4374] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" Namespace="calico-system" Pod="goldmane-7988f88666-qcj2j" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:35:46.904506 containerd[1601]: 2025-09-12 17:35:46.882 [INFO][4374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1004327243 ContainerID="e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" Namespace="calico-system" Pod="goldmane-7988f88666-qcj2j" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:35:46.904506 containerd[1601]: 2025-09-12 17:35:46.885 [INFO][4374] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" Namespace="calico-system" Pod="goldmane-7988f88666-qcj2j" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:35:46.904506 containerd[1601]: 2025-09-12 17:35:46.886 [INFO][4374] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" Namespace="calico-system" Pod="goldmane-7988f88666-qcj2j" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--qcj2j-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6d962edc-2c93-4f13-a932-db0a9095910d", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0", Pod:"goldmane-7988f88666-qcj2j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1004327243", MAC:"2e:19:db:05:2b:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:46.904506 containerd[1601]: 2025-09-12 17:35:46.900 [INFO][4374] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0" Namespace="calico-system" Pod="goldmane-7988f88666-qcj2j" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:35:46.913983 containerd[1601]: time="2025-09-12T17:35:46.913942930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x5bfz,Uid:470162f6-a6a8-4102-8800-2ba86b4652d8,Namespace:kube-system,Attempt:1,} returns sandbox id \"893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e\"" Sep 12 17:35:46.915752 kubelet[2724]: E0912 17:35:46.915716 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:46.919140 containerd[1601]: time="2025-09-12T17:35:46.919106967Z" level=info msg="CreateContainer within sandbox \"893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:35:46.932934 containerd[1601]: time="2025-09-12T17:35:46.932796921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:46.932934 containerd[1601]: time="2025-09-12T17:35:46.932869407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:46.932934 containerd[1601]: time="2025-09-12T17:35:46.932885036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:46.933297 containerd[1601]: time="2025-09-12T17:35:46.933006234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:46.962016 containerd[1601]: time="2025-09-12T17:35:46.961963714Z" level=info msg="CreateContainer within sandbox \"893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f4043709e1128d623ea55a2bf0663a49bd11672c2cc02210830caafa3a8aac0\"" Sep 12 17:35:46.964263 containerd[1601]: time="2025-09-12T17:35:46.963815411Z" level=info msg="StartContainer for \"4f4043709e1128d623ea55a2bf0663a49bd11672c2cc02210830caafa3a8aac0\"" Sep 12 17:35:46.980890 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:35:46.992824 systemd-networkd[1269]: calid30283e9589: Link UP Sep 12 17:35:46.996608 systemd-networkd[1269]: calid30283e9589: Gained carrier Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.700 [INFO][4389] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.714 [INFO][4389] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0 calico-apiserver-689c48fdcf- calico-apiserver 30910c1c-2d0b-4258-860d-9f0d09a1d5af 960 0 2025-09-12 17:35:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:689c48fdcf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-689c48fdcf-95pkx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid30283e9589 [] [] }} ContainerID="9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-95pkx" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-" Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.714 [INFO][4389] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-95pkx" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.751 [INFO][4422] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" HandleID="k8s-pod-network.9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" Workload="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.756 [INFO][4422] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" HandleID="k8s-pod-network.9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" Workload="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-689c48fdcf-95pkx", "timestamp":"2025-09-12 17:35:46.751615591 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.756 [INFO][4422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.874 [INFO][4422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.874 [INFO][4422] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.922 [INFO][4422] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" host="localhost" Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.939 [INFO][4422] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.948 [INFO][4422] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.951 [INFO][4422] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.953 [INFO][4422] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.953 [INFO][4422] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" host="localhost" Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.955 [INFO][4422] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.964 [INFO][4422] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" host="localhost" Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.978 [INFO][4422] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" host="localhost" Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.978 [INFO][4422] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" host="localhost" Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.978 [INFO][4422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:35:47.022746 containerd[1601]: 2025-09-12 17:35:46.978 [INFO][4422] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" HandleID="k8s-pod-network.9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" Workload="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:35:47.023624 containerd[1601]: 2025-09-12 17:35:46.985 [INFO][4389] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-95pkx" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0", GenerateName:"calico-apiserver-689c48fdcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"30910c1c-2d0b-4258-860d-9f0d09a1d5af", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"689c48fdcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-689c48fdcf-95pkx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid30283e9589", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:47.023624 containerd[1601]: 2025-09-12 17:35:46.986 [INFO][4389] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-95pkx" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:35:47.023624 containerd[1601]: 2025-09-12 17:35:46.986 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid30283e9589 ContainerID="9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-95pkx" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:35:47.023624 containerd[1601]: 2025-09-12 17:35:46.996 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-95pkx" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:35:47.023624 containerd[1601]: 2025-09-12 17:35:46.999 [INFO][4389] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-95pkx" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0", GenerateName:"calico-apiserver-689c48fdcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"30910c1c-2d0b-4258-860d-9f0d09a1d5af", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"689c48fdcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a", Pod:"calico-apiserver-689c48fdcf-95pkx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid30283e9589", MAC:"6e:a4:1e:53:27:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:47.023624 containerd[1601]: 2025-09-12 17:35:47.010 [INFO][4389] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-95pkx" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:35:47.113668 containerd[1601]: time="2025-09-12T17:35:47.113534302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:47.116454 containerd[1601]: time="2025-09-12T17:35:47.113637092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:47.116454 containerd[1601]: time="2025-09-12T17:35:47.113657431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:47.116454 containerd[1601]: time="2025-09-12T17:35:47.113783446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:47.119334 systemd[1]: Started sshd@8-10.0.0.72:22-10.0.0.1:41770.service - OpenSSH per-connection server daemon (10.0.0.1:41770). 
Sep 12 17:35:47.127958 containerd[1601]: time="2025-09-12T17:35:47.126769068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-qcj2j,Uid:6d962edc-2c93-4f13-a932-db0a9095910d,Namespace:calico-system,Attempt:1,} returns sandbox id \"e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0\"" Sep 12 17:35:47.164667 containerd[1601]: time="2025-09-12T17:35:47.164606236Z" level=info msg="StartContainer for \"4f4043709e1128d623ea55a2bf0663a49bd11672c2cc02210830caafa3a8aac0\" returns successfully" Sep 12 17:35:47.176047 sshd[4597]: Accepted publickey for core from 10.0.0.1 port 41770 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:35:47.178452 sshd[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:47.179280 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:35:47.184765 systemd-logind[1577]: New session 9 of user core. Sep 12 17:35:47.196731 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:35:47.215241 containerd[1601]: time="2025-09-12T17:35:47.215191216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-689c48fdcf-95pkx,Uid:30910c1c-2d0b-4258-860d-9f0d09a1d5af,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a\"" Sep 12 17:35:47.277210 kubelet[2724]: E0912 17:35:47.276951 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:47.411730 sshd[4597]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:47.416374 systemd[1]: sshd@8-10.0.0.72:22-10.0.0.1:41770.service: Deactivated successfully. Sep 12 17:35:47.419230 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:35:47.419991 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:35:47.423438 containerd[1601]: time="2025-09-12T17:35:47.422139904Z" level=info msg="StopPodSandbox for \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\"" Sep 12 17:35:47.422215 systemd-logind[1577]: Removed session 9. Sep 12 17:35:47.657346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3443841497.mount: Deactivated successfully. Sep 12 17:35:47.683270 systemd-networkd[1269]: calib55e790a3f0: Gained IPv6LL Sep 12 17:35:47.905594 kubelet[2724]: I0912 17:35:47.905506 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-x5bfz" podStartSLOduration=41.905482027 podStartE2EDuration="41.905482027s" podCreationTimestamp="2025-09-12 17:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:47.705194044 +0000 UTC m=+47.385795679" watchObservedRunningTime="2025-09-12 17:35:47.905482027 +0000 UTC m=+47.586083662" Sep 12 17:35:47.959590 containerd[1601]: 2025-09-12 17:35:47.905 [INFO][4663] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:35:47.959590 containerd[1601]: 2025-09-12 17:35:47.905 [INFO][4663] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" iface="eth0" netns="/var/run/netns/cni-32721ab5-7e95-28b6-19fd-ff1e460de6e0" Sep 12 17:35:47.959590 containerd[1601]: 2025-09-12 17:35:47.905 [INFO][4663] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" iface="eth0" netns="/var/run/netns/cni-32721ab5-7e95-28b6-19fd-ff1e460de6e0" Sep 12 17:35:47.959590 containerd[1601]: 2025-09-12 17:35:47.905 [INFO][4663] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" iface="eth0" netns="/var/run/netns/cni-32721ab5-7e95-28b6-19fd-ff1e460de6e0" Sep 12 17:35:47.959590 containerd[1601]: 2025-09-12 17:35:47.905 [INFO][4663] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:35:47.959590 containerd[1601]: 2025-09-12 17:35:47.905 [INFO][4663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:35:47.959590 containerd[1601]: 2025-09-12 17:35:47.933 [INFO][4674] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" HandleID="k8s-pod-network.06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Workload="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:35:47.959590 containerd[1601]: 2025-09-12 17:35:47.933 [INFO][4674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:47.959590 containerd[1601]: 2025-09-12 17:35:47.933 [INFO][4674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:47.959590 containerd[1601]: 2025-09-12 17:35:47.938 [WARNING][4674] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" HandleID="k8s-pod-network.06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Workload="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:35:47.959590 containerd[1601]: 2025-09-12 17:35:47.938 [INFO][4674] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" HandleID="k8s-pod-network.06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Workload="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:35:47.959590 containerd[1601]: 2025-09-12 17:35:47.940 [INFO][4674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:47.959590 containerd[1601]: 2025-09-12 17:35:47.946 [INFO][4663] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:35:47.967441 containerd[1601]: time="2025-09-12T17:35:47.966567542Z" level=info msg="TearDown network for sandbox \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\" successfully" Sep 12 17:35:47.967441 containerd[1601]: time="2025-09-12T17:35:47.966642789Z" level=info msg="StopPodSandbox for \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\" returns successfully" Sep 12 17:35:47.967597 containerd[1601]: time="2025-09-12T17:35:47.967560898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-689c48fdcf-qf7qr,Uid:e7b2ca51-b8fe-48dd-93cf-b98746e57dea,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:35:47.971893 systemd[1]: run-netns-cni\x2d32721ab5\x2d7e95\x2d28b6\x2d19fd\x2dff1e460de6e0.mount: Deactivated successfully. Sep 12 17:35:48.195482 systemd-networkd[1269]: cali7cb0a604231: Gained IPv6LL Sep 12 17:35:48.282152 kubelet[2724]: E0912 17:35:48.282034 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:48.421741 containerd[1601]: time="2025-09-12T17:35:48.421687144Z" level=info msg="StopPodSandbox for \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\"" Sep 12 17:35:48.421871 containerd[1601]: time="2025-09-12T17:35:48.421763892Z" level=info msg="StopPodSandbox for \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\"" Sep 12 17:35:48.514620 systemd-networkd[1269]: calib1004327243: Gained IPv6LL Sep 12 17:35:48.682689 containerd[1601]: 2025-09-12 17:35:48.594 [INFO][4728] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:35:48.682689 containerd[1601]: 2025-09-12 17:35:48.594 [INFO][4728] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" iface="eth0" netns="/var/run/netns/cni-e0ab11a7-7824-e8e4-f457-478e97888823" Sep 12 17:35:48.682689 containerd[1601]: 2025-09-12 17:35:48.594 [INFO][4728] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" iface="eth0" netns="/var/run/netns/cni-e0ab11a7-7824-e8e4-f457-478e97888823" Sep 12 17:35:48.682689 containerd[1601]: 2025-09-12 17:35:48.595 [INFO][4728] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" iface="eth0" netns="/var/run/netns/cni-e0ab11a7-7824-e8e4-f457-478e97888823" Sep 12 17:35:48.682689 containerd[1601]: 2025-09-12 17:35:48.595 [INFO][4728] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:35:48.682689 containerd[1601]: 2025-09-12 17:35:48.595 [INFO][4728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:35:48.682689 containerd[1601]: 2025-09-12 17:35:48.624 [INFO][4744] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" HandleID="k8s-pod-network.6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Workload="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:35:48.682689 containerd[1601]: 2025-09-12 17:35:48.624 [INFO][4744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:48.682689 containerd[1601]: 2025-09-12 17:35:48.624 [INFO][4744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:48.682689 containerd[1601]: 2025-09-12 17:35:48.672 [WARNING][4744] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" HandleID="k8s-pod-network.6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Workload="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:35:48.682689 containerd[1601]: 2025-09-12 17:35:48.672 [INFO][4744] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" HandleID="k8s-pod-network.6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Workload="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:35:48.682689 containerd[1601]: 2025-09-12 17:35:48.675 [INFO][4744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:48.682689 containerd[1601]: 2025-09-12 17:35:48.679 [INFO][4728] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:35:48.688718 containerd[1601]: time="2025-09-12T17:35:48.688360212Z" level=info msg="TearDown network for sandbox \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\" successfully" Sep 12 17:35:48.688718 containerd[1601]: time="2025-09-12T17:35:48.688395370Z" level=info msg="StopPodSandbox for \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\" returns successfully" Sep 12 17:35:48.687384 systemd[1]: run-netns-cni\x2de0ab11a7\x2d7824\x2de8e4\x2df457\x2d478e97888823.mount: Deactivated successfully. Sep 12 17:35:48.689390 containerd[1601]: time="2025-09-12T17:35:48.689332856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65ddc98f95-bgcp8,Uid:22dbceb8-3888-4c97-ae99-1a48c9c8116f,Namespace:calico-system,Attempt:1,}" Sep 12 17:35:48.694057 containerd[1601]: 2025-09-12 17:35:48.618 [INFO][4727] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:35:48.694057 containerd[1601]: 2025-09-12 17:35:48.618 [INFO][4727] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" iface="eth0" netns="/var/run/netns/cni-081f9c7b-92cc-7d40-9f7b-a362d2b3825e" Sep 12 17:35:48.694057 containerd[1601]: 2025-09-12 17:35:48.619 [INFO][4727] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" iface="eth0" netns="/var/run/netns/cni-081f9c7b-92cc-7d40-9f7b-a362d2b3825e" Sep 12 17:35:48.694057 containerd[1601]: 2025-09-12 17:35:48.619 [INFO][4727] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" iface="eth0" netns="/var/run/netns/cni-081f9c7b-92cc-7d40-9f7b-a362d2b3825e" Sep 12 17:35:48.694057 containerd[1601]: 2025-09-12 17:35:48.619 [INFO][4727] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:35:48.694057 containerd[1601]: 2025-09-12 17:35:48.619 [INFO][4727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:35:48.694057 containerd[1601]: 2025-09-12 17:35:48.644 [INFO][4751] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" HandleID="k8s-pod-network.1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Workload="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:35:48.694057 containerd[1601]: 2025-09-12 17:35:48.644 [INFO][4751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:48.694057 containerd[1601]: 2025-09-12 17:35:48.675 [INFO][4751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:48.694057 containerd[1601]: 2025-09-12 17:35:48.682 [WARNING][4751] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" HandleID="k8s-pod-network.1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Workload="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:35:48.694057 containerd[1601]: 2025-09-12 17:35:48.682 [INFO][4751] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" HandleID="k8s-pod-network.1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Workload="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:35:48.694057 containerd[1601]: 2025-09-12 17:35:48.685 [INFO][4751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:48.694057 containerd[1601]: 2025-09-12 17:35:48.690 [INFO][4727] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:35:48.694415 containerd[1601]: time="2025-09-12T17:35:48.694283520Z" level=info msg="TearDown network for sandbox \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\" successfully" Sep 12 17:35:48.694415 containerd[1601]: time="2025-09-12T17:35:48.694319510Z" level=info msg="StopPodSandbox for \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\" returns successfully" Sep 12 17:35:48.694758 kubelet[2724]: E0912 17:35:48.694728 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:48.695737 containerd[1601]: time="2025-09-12T17:35:48.695694872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kvzhb,Uid:c01b22f9-f886-420b-b430-8234a928fb35,Namespace:kube-system,Attempt:1,}" Sep 12 17:35:48.698342 systemd[1]: run-netns-cni\x2d081f9c7b\x2d92cc\x2d7d40\x2d9f7b\x2da362d2b3825e.mount: Deactivated successfully. Sep 12 17:35:48.770696 systemd-networkd[1269]: calid30283e9589: Gained IPv6LL Sep 12 17:35:49.025227 kubelet[2724]: I0912 17:35:49.025086 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:35:49.025693 kubelet[2724]: E0912 17:35:49.025658 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:49.286165 kubelet[2724]: E0912 17:35:49.285953 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:49.677993 systemd-networkd[1269]: cali83992928d1c: Link UP Sep 12 17:35:49.678577 systemd-networkd[1269]: cali83992928d1c: Gained carrier Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:48.995 [INFO][4765] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.004 [INFO][4765] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0 calico-apiserver-689c48fdcf- calico-apiserver e7b2ca51-b8fe-48dd-93cf-b98746e57dea 996 0 2025-09-12 17:35:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:689c48fdcf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-689c48fdcf-qf7qr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali83992928d1c [] [] }} ContainerID="16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-qf7qr" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-" Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.004 [INFO][4765] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-qf7qr" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.029 [INFO][4779] ipam/ipam_plugin.go 225: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" HandleID="k8s-pod-network.16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" Workload="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.029 [INFO][4779] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" HandleID="k8s-pod-network.16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" Workload="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-689c48fdcf-qf7qr", "timestamp":"2025-09-12 17:35:49.029429014 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.029 [INFO][4779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.029 [INFO][4779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.029 [INFO][4779] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.123 [INFO][4779] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" host="localhost" Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.178 [INFO][4779] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.365 [INFO][4779] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.380 [INFO][4779] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.384 [INFO][4779] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.384 [INFO][4779] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" host="localhost" Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.386 [INFO][4779] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499 Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.412 [INFO][4779] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" host="localhost" Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.656 [INFO][4779] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" host="localhost" Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.657 [INFO][4779] ipam/ipam.go 878: Auto-assigned 1 out 
of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" host="localhost" Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.657 [INFO][4779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:50.216902 containerd[1601]: 2025-09-12 17:35:49.657 [INFO][4779] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" HandleID="k8s-pod-network.16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" Workload="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:35:50.218068 containerd[1601]: 2025-09-12 17:35:49.661 [INFO][4765] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-qf7qr" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0", GenerateName:"calico-apiserver-689c48fdcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7b2ca51-b8fe-48dd-93cf-b98746e57dea", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"689c48fdcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-689c48fdcf-qf7qr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83992928d1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:50.218068 containerd[1601]: 2025-09-12 17:35:49.662 [INFO][4765] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-qf7qr" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:35:50.218068 containerd[1601]: 2025-09-12 17:35:49.662 [INFO][4765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83992928d1c ContainerID="16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-qf7qr" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:35:50.218068 containerd[1601]: 2025-09-12 17:35:49.681 [INFO][4765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" Namespace="calico-apiserver" 
Pod="calico-apiserver-689c48fdcf-qf7qr" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:35:50.218068 containerd[1601]: 2025-09-12 17:35:49.685 [INFO][4765] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-qf7qr" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0", GenerateName:"calico-apiserver-689c48fdcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7b2ca51-b8fe-48dd-93cf-b98746e57dea", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"689c48fdcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499", Pod:"calico-apiserver-689c48fdcf-qf7qr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83992928d1c", MAC:"da:01:49:9d:cf:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:50.218068 containerd[1601]: 2025-09-12 17:35:50.213 [INFO][4765] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499" Namespace="calico-apiserver" Pod="calico-apiserver-689c48fdcf-qf7qr" WorkloadEndpoint="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:35:50.335466 kernel: bpftool[4882]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:35:50.422743 containerd[1601]: time="2025-09-12T17:35:50.422661074Z" level=info msg="StopPodSandbox for \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\"" Sep 12 17:35:50.570876 containerd[1601]: time="2025-09-12T17:35:50.567989262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:50.570876 containerd[1601]: time="2025-09-12T17:35:50.568058245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:50.570876 containerd[1601]: time="2025-09-12T17:35:50.568073274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:50.570876 containerd[1601]: time="2025-09-12T17:35:50.568173979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:50.605076 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:35:50.635066 systemd-networkd[1269]: vxlan.calico: Link UP Sep 12 17:35:50.635075 systemd-networkd[1269]: vxlan.calico: Gained carrier Sep 12 17:35:50.655791 containerd[1601]: time="2025-09-12T17:35:50.655717355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-689c48fdcf-qf7qr,Uid:e7b2ca51-b8fe-48dd-93cf-b98746e57dea,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499\"" Sep 12 17:35:50.782441 containerd[1601]: 2025-09-12 17:35:50.608 [INFO][4893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:35:50.782441 containerd[1601]: 2025-09-12 17:35:50.608 [INFO][4893] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" iface="eth0" netns="/var/run/netns/cni-ab795bbd-b20f-e555-1f13-9151be608a16" Sep 12 17:35:50.782441 containerd[1601]: 2025-09-12 17:35:50.608 [INFO][4893] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" iface="eth0" netns="/var/run/netns/cni-ab795bbd-b20f-e555-1f13-9151be608a16" Sep 12 17:35:50.782441 containerd[1601]: 2025-09-12 17:35:50.609 [INFO][4893] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" iface="eth0" netns="/var/run/netns/cni-ab795bbd-b20f-e555-1f13-9151be608a16" Sep 12 17:35:50.782441 containerd[1601]: 2025-09-12 17:35:50.609 [INFO][4893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:35:50.782441 containerd[1601]: 2025-09-12 17:35:50.609 [INFO][4893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:35:50.782441 containerd[1601]: 2025-09-12 17:35:50.666 [INFO][4952] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" HandleID="k8s-pod-network.f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Workload="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:35:50.782441 containerd[1601]: 2025-09-12 17:35:50.676 [INFO][4952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:50.782441 containerd[1601]: 2025-09-12 17:35:50.676 [INFO][4952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:50.782441 containerd[1601]: 2025-09-12 17:35:50.770 [WARNING][4952] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" HandleID="k8s-pod-network.f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Workload="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:35:50.782441 containerd[1601]: 2025-09-12 17:35:50.771 [INFO][4952] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" HandleID="k8s-pod-network.f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Workload="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:35:50.782441 containerd[1601]: 2025-09-12 17:35:50.772 [INFO][4952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:50.782441 containerd[1601]: 2025-09-12 17:35:50.778 [INFO][4893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:35:50.782986 containerd[1601]: time="2025-09-12T17:35:50.782567240Z" level=info msg="TearDown network for sandbox \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\" successfully" Sep 12 17:35:50.782986 containerd[1601]: time="2025-09-12T17:35:50.782600955Z" level=info msg="StopPodSandbox for \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\" returns successfully" Sep 12 17:35:50.785474 containerd[1601]: time="2025-09-12T17:35:50.783443012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pg5rh,Uid:45ffb3eb-a3d1-424f-9934-ed6fe54575da,Namespace:calico-system,Attempt:1,}" Sep 12 17:35:50.816922 systemd[1]: run-netns-cni\x2dab795bbd\x2db20f\x2de555\x2d1f13\x2d9151be608a16.mount: Deactivated successfully. Sep 12 17:35:51.147179 systemd-networkd[1269]: cali3fcd52487ae: Link UP Sep 12 17:35:51.148271 systemd-networkd[1269]: cali3fcd52487ae: Gained carrier Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.043 [INFO][5014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0 calico-kube-controllers-65ddc98f95- calico-system 22dbceb8-3888-4c97-ae99-1a48c9c8116f 1004 0 2025-09-12 17:35:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65ddc98f95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-65ddc98f95-bgcp8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3fcd52487ae [] [] }} ContainerID="01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" Namespace="calico-system" Pod="calico-kube-controllers-65ddc98f95-bgcp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-" Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.043 [INFO][5014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" Namespace="calico-system" Pod="calico-kube-controllers-65ddc98f95-bgcp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.080 [INFO][5044] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" 
HandleID="k8s-pod-network.01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" Workload="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.080 [INFO][5044] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" HandleID="k8s-pod-network.01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" Workload="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324110), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-65ddc98f95-bgcp8", "timestamp":"2025-09-12 17:35:51.080711604 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.081 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.081 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.081 [INFO][5044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.096 [INFO][5044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" host="localhost" Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.102 [INFO][5044] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.106 [INFO][5044] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.108 [INFO][5044] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.110 [INFO][5044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.110 [INFO][5044] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" host="localhost" Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.112 [INFO][5044] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.121 [INFO][5044] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" host="localhost" Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.139 [INFO][5044] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" host="localhost" Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.139 [INFO][5044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" host="localhost" Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.139 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:51.176920 containerd[1601]: 2025-09-12 17:35:51.139 [INFO][5044] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" HandleID="k8s-pod-network.01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" Workload="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:35:51.177684 containerd[1601]: 2025-09-12 17:35:51.142 [INFO][5014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" Namespace="calico-system" Pod="calico-kube-controllers-65ddc98f95-bgcp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0", GenerateName:"calico-kube-controllers-65ddc98f95-", Namespace:"calico-system", SelfLink:"", UID:"22dbceb8-3888-4c97-ae99-1a48c9c8116f", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65ddc98f95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-65ddc98f95-bgcp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fcd52487ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:51.177684 containerd[1601]: 2025-09-12 17:35:51.143 [INFO][5014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" Namespace="calico-system" Pod="calico-kube-controllers-65ddc98f95-bgcp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:35:51.177684 containerd[1601]: 2025-09-12 17:35:51.143 [INFO][5014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fcd52487ae ContainerID="01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" Namespace="calico-system" Pod="calico-kube-controllers-65ddc98f95-bgcp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:35:51.177684 containerd[1601]: 2025-09-12 17:35:51.148 [INFO][5014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" Namespace="calico-system" 
Pod="calico-kube-controllers-65ddc98f95-bgcp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:35:51.177684 containerd[1601]: 2025-09-12 17:35:51.149 [INFO][5014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" Namespace="calico-system" Pod="calico-kube-controllers-65ddc98f95-bgcp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0", GenerateName:"calico-kube-controllers-65ddc98f95-", Namespace:"calico-system", SelfLink:"", UID:"22dbceb8-3888-4c97-ae99-1a48c9c8116f", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65ddc98f95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d", Pod:"calico-kube-controllers-65ddc98f95-bgcp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fcd52487ae", MAC:"9a:da:34:61:9a:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:51.177684 containerd[1601]: 2025-09-12 17:35:51.173 [INFO][5014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d" Namespace="calico-system" Pod="calico-kube-controllers-65ddc98f95-bgcp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:35:51.315864 containerd[1601]: time="2025-09-12T17:35:51.315756279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:51.316346 containerd[1601]: time="2025-09-12T17:35:51.315825743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:51.316346 containerd[1601]: time="2025-09-12T17:35:51.315874918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:51.316346 containerd[1601]: time="2025-09-12T17:35:51.316021662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:51.331182 systemd-networkd[1269]: cali83992928d1c: Gained IPv6LL Sep 12 17:35:51.347913 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:35:51.362161 systemd-networkd[1269]: cali43c7a3177a2: Link UP Sep 12 17:35:51.364595 systemd-networkd[1269]: cali43c7a3177a2: Gained carrier Sep 12 17:35:51.386597 containerd[1601]: time="2025-09-12T17:35:51.386527038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65ddc98f95-bgcp8,Uid:22dbceb8-3888-4c97-ae99-1a48c9c8116f,Namespace:calico-system,Attempt:1,} returns sandbox id \"01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d\"" Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.191 [INFO][5052] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0 coredns-7c65d6cfc9- kube-system c01b22f9-f886-420b-b430-8234a928fb35 1005 0 2025-09-12 17:35:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-kvzhb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali43c7a3177a2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvzhb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kvzhb-" Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.191 [INFO][5052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvzhb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.253 [INFO][5080] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" HandleID="k8s-pod-network.47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" Workload="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.253 [INFO][5080] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" HandleID="k8s-pod-network.47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" Workload="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001393a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-kvzhb", "timestamp":"2025-09-12 17:35:51.253450918 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.253 [INFO][5080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.253 [INFO][5080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.253 [INFO][5080] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.262 [INFO][5080] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" host="localhost" Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.278 [INFO][5080] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.284 [INFO][5080] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.285 [INFO][5080] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.288 [INFO][5080] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.288 [INFO][5080] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" host="localhost" Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.289 [INFO][5080] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.308 [INFO][5080] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" host="localhost" Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.353 [INFO][5080] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" host="localhost" Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.353 [INFO][5080] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" host="localhost" Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.353 [INFO][5080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:35:51.418945 containerd[1601]: 2025-09-12 17:35:51.353 [INFO][5080] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" HandleID="k8s-pod-network.47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" Workload="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:35:51.419748 containerd[1601]: 2025-09-12 17:35:51.357 [INFO][5052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvzhb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c01b22f9-f886-420b-b430-8234a928fb35", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-kvzhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43c7a3177a2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:51.419748 containerd[1601]: 2025-09-12 17:35:51.357 [INFO][5052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvzhb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:35:51.419748 containerd[1601]: 2025-09-12 17:35:51.357 [INFO][5052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43c7a3177a2 ContainerID="47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvzhb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:35:51.419748 containerd[1601]: 2025-09-12 17:35:51.364 [INFO][5052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvzhb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:35:51.419748 
containerd[1601]: 2025-09-12 17:35:51.365 [INFO][5052] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvzhb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c01b22f9-f886-420b-b430-8234a928fb35", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d", Pod:"coredns-7c65d6cfc9-kvzhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43c7a3177a2", MAC:"7a:e5:29:cc:2c:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:51.419748 containerd[1601]: 2025-09-12 17:35:51.414 [INFO][5052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvzhb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:35:51.494771 containerd[1601]: time="2025-09-12T17:35:51.474376529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:51.494771 containerd[1601]: time="2025-09-12T17:35:51.494199279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:51.494771 containerd[1601]: time="2025-09-12T17:35:51.494228245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:51.494771 containerd[1601]: time="2025-09-12T17:35:51.494379988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:51.540935 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:35:51.584648 systemd-networkd[1269]: cali3079316426f: Link UP Sep 12 17:35:51.587331 containerd[1601]: time="2025-09-12T17:35:51.587285812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kvzhb,Uid:c01b22f9-f886-420b-b430-8234a928fb35,Namespace:kube-system,Attempt:1,} returns sandbox id \"47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d\"" Sep 12 17:35:51.588001 systemd-networkd[1269]: cali3079316426f: Gained carrier Sep 12 17:35:51.592465 kubelet[2724]: E0912 17:35:51.590858 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:51.596245 containerd[1601]: time="2025-09-12T17:35:51.596168617Z" level=info msg="CreateContainer within sandbox \"47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.429 [INFO][5126] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pg5rh-eth0 csi-node-driver- calico-system 45ffb3eb-a3d1-424f-9934-ed6fe54575da 1026 0 2025-09-12 17:35:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pg5rh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3079316426f [] [] }} ContainerID="d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" Namespace="calico-system" Pod="csi-node-driver-pg5rh" WorkloadEndpoint="localhost-k8s-csi--node--driver--pg5rh-" Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.429 [INFO][5126] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" Namespace="calico-system" Pod="csi-node-driver-pg5rh" WorkloadEndpoint="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.501 [INFO][5155] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" HandleID="k8s-pod-network.d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" Workload="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.501 [INFO][5155] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" HandleID="k8s-pod-network.d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" Workload="localhost-k8s-csi--node--driver--pg5rh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000393ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pg5rh", "timestamp":"2025-09-12 17:35:51.50102003 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.501 [INFO][5155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.502 [INFO][5155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.502 [INFO][5155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.514 [INFO][5155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" host="localhost" Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.522 [INFO][5155] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.531 [INFO][5155] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.535 [INFO][5155] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.539 [INFO][5155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.539 [INFO][5155] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" host="localhost" Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.542 [INFO][5155] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40 Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.548 [INFO][5155] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" host="localhost" Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.563 [INFO][5155] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" host="localhost" Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.564 [INFO][5155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" host="localhost" Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.564 [INFO][5155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:35:51.617994 containerd[1601]: 2025-09-12 17:35:51.564 [INFO][5155] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" HandleID="k8s-pod-network.d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" Workload="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:35:51.619211 containerd[1601]: 2025-09-12 17:35:51.574 [INFO][5126] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" Namespace="calico-system" Pod="csi-node-driver-pg5rh" WorkloadEndpoint="localhost-k8s-csi--node--driver--pg5rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pg5rh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45ffb3eb-a3d1-424f-9934-ed6fe54575da", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pg5rh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3079316426f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:51.619211 containerd[1601]: 2025-09-12 17:35:51.574 [INFO][5126] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" Namespace="calico-system" Pod="csi-node-driver-pg5rh" WorkloadEndpoint="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:35:51.619211 containerd[1601]: 2025-09-12 17:35:51.574 [INFO][5126] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3079316426f ContainerID="d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" Namespace="calico-system" Pod="csi-node-driver-pg5rh" WorkloadEndpoint="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:35:51.619211 containerd[1601]: 2025-09-12 17:35:51.589 [INFO][5126] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" Namespace="calico-system" Pod="csi-node-driver-pg5rh" WorkloadEndpoint="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:35:51.619211 containerd[1601]: 2025-09-12 17:35:51.590 [INFO][5126] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" Namespace="calico-system" Pod="csi-node-driver-pg5rh" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--pg5rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pg5rh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45ffb3eb-a3d1-424f-9934-ed6fe54575da", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40", Pod:"csi-node-driver-pg5rh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3079316426f", MAC:"0e:d5:da:94:26:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:51.619211 containerd[1601]: 2025-09-12 17:35:51.610 [INFO][5126] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40" Namespace="calico-system" Pod="csi-node-driver-pg5rh" WorkloadEndpoint="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:35:51.629752 containerd[1601]: time="2025-09-12T17:35:51.629655985Z" level=info msg="CreateContainer within sandbox \"47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"817f4475bf5ff46429bf4100e5b17d82268f798910a1ba636665801373956014\"" Sep 12 17:35:51.630653 containerd[1601]: time="2025-09-12T17:35:51.630612303Z" level=info msg="StartContainer for \"817f4475bf5ff46429bf4100e5b17d82268f798910a1ba636665801373956014\"" Sep 12 17:35:51.655920 containerd[1601]: time="2025-09-12T17:35:51.655737423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:51.655920 containerd[1601]: time="2025-09-12T17:35:51.655828078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:51.655920 containerd[1601]: time="2025-09-12T17:35:51.655851032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:51.656204 containerd[1601]: time="2025-09-12T17:35:51.655989590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:51.682758 containerd[1601]: time="2025-09-12T17:35:51.682612453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:51.684647 containerd[1601]: time="2025-09-12T17:35:51.684605882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:35:51.690680 containerd[1601]: time="2025-09-12T17:35:51.690520755Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:51.694995 containerd[1601]: time="2025-09-12T17:35:51.694938887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:51.695625 containerd[1601]: time="2025-09-12T17:35:51.695590615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 5.119231297s" Sep 12 17:35:51.695679 containerd[1601]: time="2025-09-12T17:35:51.695630081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:35:51.699514 containerd[1601]: time="2025-09-12T17:35:51.699347030Z" level=info msg="CreateContainer within sandbox \"beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:35:51.699514 containerd[1601]: time="2025-09-12T17:35:51.699377479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:35:51.702171 containerd[1601]: time="2025-09-12T17:35:51.702127749Z" level=info msg="StartContainer for \"817f4475bf5ff46429bf4100e5b17d82268f798910a1ba636665801373956014\" returns successfully" Sep 12 17:35:51.703697 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:35:51.727761 containerd[1601]: time="2025-09-12T17:35:51.727696847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pg5rh,Uid:45ffb3eb-a3d1-424f-9934-ed6fe54575da,Namespace:calico-system,Attempt:1,} returns sandbox id \"d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40\"" Sep 12 17:35:52.034607 systemd-networkd[1269]: vxlan.calico: Gained IPv6LL Sep 12 17:35:52.077271 containerd[1601]: time="2025-09-12T17:35:52.077217238Z" level=info msg="CreateContainer within sandbox \"beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"81054f9a9701e95e8ceae345e0ba29dc7571ea1b761a2b11919a6f2c35d32646\"" Sep 12 17:35:52.078059 containerd[1601]: time="2025-09-12T17:35:52.077896228Z" level=info msg="StartContainer for \"81054f9a9701e95e8ceae345e0ba29dc7571ea1b761a2b11919a6f2c35d32646\"" Sep 12 17:35:52.426620 systemd[1]: Started sshd@9-10.0.0.72:22-10.0.0.1:37138.service - OpenSSH per-connection server daemon (10.0.0.1:37138). 
Sep 12 17:35:52.483590 systemd-networkd[1269]: cali3fcd52487ae: Gained IPv6LL Sep 12 17:35:52.652036 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 37138 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:35:52.654088 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:52.658321 systemd-logind[1577]: New session 10 of user core. Sep 12 17:35:52.662707 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:35:52.674652 systemd-networkd[1269]: cali3079316426f: Gained IPv6LL Sep 12 17:35:52.679360 containerd[1601]: time="2025-09-12T17:35:52.679285206Z" level=info msg="StartContainer for \"81054f9a9701e95e8ceae345e0ba29dc7571ea1b761a2b11919a6f2c35d32646\" returns successfully" Sep 12 17:35:52.682218 kubelet[2724]: E0912 17:35:52.682004 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:52.948137 sshd[5329]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:52.952897 systemd[1]: sshd@9-10.0.0.72:22-10.0.0.1:37138.service: Deactivated successfully. Sep 12 17:35:52.955560 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:35:52.955712 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:35:52.956730 systemd-logind[1577]: Removed session 10. Sep 12 17:35:53.037645 kubelet[2724]: I0912 17:35:53.037277 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kvzhb" podStartSLOduration=47.037254262 podStartE2EDuration="47.037254262s" podCreationTimestamp="2025-09-12 17:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:52.861957106 +0000 UTC m=+52.542558741" watchObservedRunningTime="2025-09-12 17:35:53.037254262 +0000 UTC m=+52.717855907" Sep 12 17:35:53.250682 systemd-networkd[1269]: cali43c7a3177a2: Gained IPv6LL Sep 12 17:35:53.425555 kubelet[2724]: E0912 17:35:53.425520 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:53.687379 kubelet[2724]: E0912 17:35:53.687336 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:54.691008 kubelet[2724]: E0912 17:35:54.690611 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:54.914092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3401236926.mount: Deactivated successfully. 
Sep 12 17:35:56.375941 containerd[1601]: time="2025-09-12T17:35:56.375817630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:56.376981 containerd[1601]: time="2025-09-12T17:35:56.376841760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:35:56.377913 containerd[1601]: time="2025-09-12T17:35:56.377873936Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:56.380508 containerd[1601]: time="2025-09-12T17:35:56.380473107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:56.381359 containerd[1601]: time="2025-09-12T17:35:56.381332952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.68192318s" Sep 12 17:35:56.381438 containerd[1601]: time="2025-09-12T17:35:56.381362338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:35:56.382771 containerd[1601]: time="2025-09-12T17:35:56.382729168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:35:56.385153 containerd[1601]: time="2025-09-12T17:35:56.385089569Z" level=info msg="CreateContainer within sandbox \"e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:35:56.402889 containerd[1601]: time="2025-09-12T17:35:56.402811747Z" level=info msg="CreateContainer within sandbox \"e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7df7079d766849d3d88fdb70ad3236913c5cd0382f59f76d89624b97dfb56ab9\"" Sep 12 17:35:56.403620 containerd[1601]: time="2025-09-12T17:35:56.403566459Z" level=info msg="StartContainer for \"7df7079d766849d3d88fdb70ad3236913c5cd0382f59f76d89624b97dfb56ab9\"" Sep 12 17:35:56.486220 containerd[1601]: time="2025-09-12T17:35:56.486167015Z" level=info msg="StartContainer for \"7df7079d766849d3d88fdb70ad3236913c5cd0382f59f76d89624b97dfb56ab9\" returns successfully" Sep 12 17:35:57.960790 systemd[1]: Started sshd@10-10.0.0.72:22-10.0.0.1:37146.service - OpenSSH per-connection server daemon (10.0.0.1:37146). Sep 12 17:35:58.007112 sshd[5463]: Accepted publickey for core from 10.0.0.1 port 37146 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:35:58.009528 sshd[5463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:58.015488 systemd-logind[1577]: New session 11 of user core. Sep 12 17:35:58.026090 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:35:58.177797 sshd[5463]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:58.183008 systemd[1]: sshd@10-10.0.0.72:22-10.0.0.1:37146.service: Deactivated successfully. 
Sep 12 17:35:58.187378 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:35:58.191623 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:35:58.193257 systemd-logind[1577]: Removed session 11. Sep 12 17:36:00.401476 containerd[1601]: time="2025-09-12T17:36:00.401013712Z" level=info msg="StopPodSandbox for \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\"" Sep 12 17:36:00.670630 containerd[1601]: 2025-09-12 17:36:00.620 [WARNING][5494] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0", GenerateName:"calico-kube-controllers-65ddc98f95-", Namespace:"calico-system", SelfLink:"", UID:"22dbceb8-3888-4c97-ae99-1a48c9c8116f", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65ddc98f95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d", Pod:"calico-kube-controllers-65ddc98f95-bgcp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fcd52487ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:00.670630 containerd[1601]: 2025-09-12 17:36:00.621 [INFO][5494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:36:00.670630 containerd[1601]: 2025-09-12 17:36:00.621 [INFO][5494] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" iface="eth0" netns="" Sep 12 17:36:00.670630 containerd[1601]: 2025-09-12 17:36:00.621 [INFO][5494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:36:00.670630 containerd[1601]: 2025-09-12 17:36:00.621 [INFO][5494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:36:00.670630 containerd[1601]: 2025-09-12 17:36:00.653 [INFO][5505] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" HandleID="k8s-pod-network.6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Workload="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:36:00.670630 containerd[1601]: 2025-09-12 17:36:00.653 [INFO][5505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:00.670630 containerd[1601]: 2025-09-12 17:36:00.653 [INFO][5505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:00.670630 containerd[1601]: 2025-09-12 17:36:00.660 [WARNING][5505] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" HandleID="k8s-pod-network.6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Workload="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:36:00.670630 containerd[1601]: 2025-09-12 17:36:00.660 [INFO][5505] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" HandleID="k8s-pod-network.6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Workload="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:36:00.670630 containerd[1601]: 2025-09-12 17:36:00.662 [INFO][5505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:00.670630 containerd[1601]: 2025-09-12 17:36:00.666 [INFO][5494] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:36:00.670630 containerd[1601]: time="2025-09-12T17:36:00.670517001Z" level=info msg="TearDown network for sandbox \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\" successfully" Sep 12 17:36:00.670630 containerd[1601]: time="2025-09-12T17:36:00.670564172Z" level=info msg="StopPodSandbox for \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\" returns successfully" Sep 12 17:36:00.671297 containerd[1601]: time="2025-09-12T17:36:00.671259597Z" level=info msg="RemovePodSandbox for \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\"" Sep 12 17:36:00.674119 containerd[1601]: time="2025-09-12T17:36:00.674068828Z" level=info msg="Forcibly stopping sandbox \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\"" Sep 12 17:36:00.822185 containerd[1601]: 2025-09-12 17:36:00.765 [WARNING][5522] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0", GenerateName:"calico-kube-controllers-65ddc98f95-", Namespace:"calico-system", SelfLink:"", UID:"22dbceb8-3888-4c97-ae99-1a48c9c8116f", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65ddc98f95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d", Pod:"calico-kube-controllers-65ddc98f95-bgcp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fcd52487ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:00.822185 containerd[1601]: 2025-09-12 17:36:00.766 [INFO][5522] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:36:00.822185 containerd[1601]: 2025-09-12 17:36:00.766 [INFO][5522] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" iface="eth0" netns="" Sep 12 17:36:00.822185 containerd[1601]: 2025-09-12 17:36:00.766 [INFO][5522] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:36:00.822185 containerd[1601]: 2025-09-12 17:36:00.766 [INFO][5522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:36:00.822185 containerd[1601]: 2025-09-12 17:36:00.798 [INFO][5531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" HandleID="k8s-pod-network.6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Workload="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:36:00.822185 containerd[1601]: 2025-09-12 17:36:00.799 [INFO][5531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:00.822185 containerd[1601]: 2025-09-12 17:36:00.799 [INFO][5531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:00.822185 containerd[1601]: 2025-09-12 17:36:00.809 [WARNING][5531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" HandleID="k8s-pod-network.6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Workload="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:36:00.822185 containerd[1601]: 2025-09-12 17:36:00.809 [INFO][5531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" HandleID="k8s-pod-network.6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Workload="localhost-k8s-calico--kube--controllers--65ddc98f95--bgcp8-eth0" Sep 12 17:36:00.822185 containerd[1601]: 2025-09-12 17:36:00.813 [INFO][5531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:00.822185 containerd[1601]: 2025-09-12 17:36:00.818 [INFO][5522] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd" Sep 12 17:36:00.822885 containerd[1601]: time="2025-09-12T17:36:00.822237686Z" level=info msg="TearDown network for sandbox \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\" successfully" Sep 12 17:36:01.648224 containerd[1601]: time="2025-09-12T17:36:01.648081280Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:01.648813 containerd[1601]: time="2025-09-12T17:36:01.648541964Z" level=info msg="RemovePodSandbox \"6e071d5956819b4c229415b0175478182c5e39302ee903ad4c6c494047684bdd\" returns successfully" Sep 12 17:36:01.652328 containerd[1601]: time="2025-09-12T17:36:01.649728089Z" level=info msg="StopPodSandbox for \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\"" Sep 12 17:36:01.743081 containerd[1601]: 2025-09-12 17:36:01.688 [WARNING][5554] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c01b22f9-f886-420b-b430-8234a928fb35", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d", Pod:"coredns-7c65d6cfc9-kvzhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43c7a3177a2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:01.743081 containerd[1601]: 2025-09-12 17:36:01.689 [INFO][5554] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:36:01.743081 containerd[1601]: 2025-09-12 17:36:01.689 [INFO][5554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" iface="eth0" netns="" Sep 12 17:36:01.743081 containerd[1601]: 2025-09-12 17:36:01.689 [INFO][5554] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:36:01.743081 containerd[1601]: 2025-09-12 17:36:01.689 [INFO][5554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:36:01.743081 containerd[1601]: 2025-09-12 17:36:01.716 [INFO][5565] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" HandleID="k8s-pod-network.1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Workload="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:36:01.743081 containerd[1601]: 2025-09-12 17:36:01.716 [INFO][5565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:01.743081 containerd[1601]: 2025-09-12 17:36:01.716 [INFO][5565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:01.743081 containerd[1601]: 2025-09-12 17:36:01.734 [WARNING][5565] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" HandleID="k8s-pod-network.1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Workload="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:36:01.743081 containerd[1601]: 2025-09-12 17:36:01.734 [INFO][5565] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" HandleID="k8s-pod-network.1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Workload="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:36:01.743081 containerd[1601]: 2025-09-12 17:36:01.737 [INFO][5565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:01.743081 containerd[1601]: 2025-09-12 17:36:01.740 [INFO][5554] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:36:01.743670 containerd[1601]: time="2025-09-12T17:36:01.743621360Z" level=info msg="TearDown network for sandbox \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\" successfully" Sep 12 17:36:01.743670 containerd[1601]: time="2025-09-12T17:36:01.743652441Z" level=info msg="StopPodSandbox for \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\" returns successfully" Sep 12 17:36:01.744219 containerd[1601]: time="2025-09-12T17:36:01.744181214Z" level=info msg="RemovePodSandbox for \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\"" Sep 12 17:36:01.744268 containerd[1601]: time="2025-09-12T17:36:01.744233014Z" level=info msg="Forcibly stopping sandbox \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\"" Sep 12 17:36:01.856376 containerd[1601]: 2025-09-12 17:36:01.812 [WARNING][5583] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c01b22f9-f886-420b-b430-8234a928fb35", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47ae6b15113c40b161f1c59f6db8c7cd98b3e72ec8ec8f7db1426712a4fa005d", Pod:"coredns-7c65d6cfc9-kvzhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43c7a3177a2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:01.856376 containerd[1601]: 2025-09-12 17:36:01.813 [INFO][5583] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:36:01.856376 containerd[1601]: 2025-09-12 17:36:01.813 [INFO][5583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" iface="eth0" netns="" Sep 12 17:36:01.856376 containerd[1601]: 2025-09-12 17:36:01.813 [INFO][5583] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:36:01.856376 containerd[1601]: 2025-09-12 17:36:01.813 [INFO][5583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:36:01.856376 containerd[1601]: 2025-09-12 17:36:01.840 [INFO][5596] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" HandleID="k8s-pod-network.1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Workload="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:36:01.856376 containerd[1601]: 2025-09-12 17:36:01.840 [INFO][5596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:01.856376 containerd[1601]: 2025-09-12 17:36:01.840 [INFO][5596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:01.856376 containerd[1601]: 2025-09-12 17:36:01.849 [WARNING][5596] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" HandleID="k8s-pod-network.1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Workload="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:36:01.856376 containerd[1601]: 2025-09-12 17:36:01.849 [INFO][5596] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" HandleID="k8s-pod-network.1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Workload="localhost-k8s-coredns--7c65d6cfc9--kvzhb-eth0" Sep 12 17:36:01.856376 containerd[1601]: 2025-09-12 17:36:01.850 [INFO][5596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:01.856376 containerd[1601]: 2025-09-12 17:36:01.853 [INFO][5583] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151" Sep 12 17:36:01.856837 containerd[1601]: time="2025-09-12T17:36:01.856442576Z" level=info msg="TearDown network for sandbox \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\" successfully" Sep 12 17:36:02.308664 containerd[1601]: time="2025-09-12T17:36:02.308562044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:02.308886 containerd[1601]: time="2025-09-12T17:36:02.308693055Z" level=info msg="RemovePodSandbox \"1c53b0048c2877e1dc0a7cd53c245ac5d0ccf59e6f98f93f2a18540394567151\" returns successfully" Sep 12 17:36:02.309432 containerd[1601]: time="2025-09-12T17:36:02.309377036Z" level=info msg="StopPodSandbox for \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\"" Sep 12 17:36:02.313203 containerd[1601]: time="2025-09-12T17:36:02.312946980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:02.314704 containerd[1601]: time="2025-09-12T17:36:02.314548410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:36:02.315999 containerd[1601]: time="2025-09-12T17:36:02.315956117Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:02.318986 containerd[1601]: time="2025-09-12T17:36:02.318952872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:02.319499 containerd[1601]: time="2025-09-12T17:36:02.319469873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.936692092s" Sep 12 17:36:02.319568 containerd[1601]: time="2025-09-12T17:36:02.319514508Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:36:02.321756 containerd[1601]: time="2025-09-12T17:36:02.321727200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:36:02.323403 containerd[1601]: time="2025-09-12T17:36:02.322861975Z" level=info msg="CreateContainer within sandbox \"9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:36:02.349345 containerd[1601]: time="2025-09-12T17:36:02.349288721Z" level=info msg="CreateContainer within sandbox \"9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b9def0ecf643079f28c6d0086561a00d7cedd01ca05daeae5367794d6185e4f2\"" Sep 12 17:36:02.350572 containerd[1601]: time="2025-09-12T17:36:02.350500173Z" level=info msg="StartContainer for \"b9def0ecf643079f28c6d0086561a00d7cedd01ca05daeae5367794d6185e4f2\"" Sep 12 17:36:02.402520 containerd[1601]: 2025-09-12 17:36:02.355 [WARNING][5614] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"470162f6-a6a8-4102-8800-2ba86b4652d8", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e", Pod:"coredns-7c65d6cfc9-x5bfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cb0a604231", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:02.402520 containerd[1601]: 2025-09-12 17:36:02.356 [INFO][5614] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:36:02.402520 containerd[1601]: 2025-09-12 17:36:02.356 [INFO][5614] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" iface="eth0" netns="" Sep 12 17:36:02.402520 containerd[1601]: 2025-09-12 17:36:02.356 [INFO][5614] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:36:02.402520 containerd[1601]: 2025-09-12 17:36:02.356 [INFO][5614] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:36:02.402520 containerd[1601]: 2025-09-12 17:36:02.384 [INFO][5626] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" HandleID="k8s-pod-network.7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Workload="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:36:02.402520 containerd[1601]: 2025-09-12 17:36:02.384 [INFO][5626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:02.402520 containerd[1601]: 2025-09-12 17:36:02.384 [INFO][5626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:02.402520 containerd[1601]: 2025-09-12 17:36:02.393 [WARNING][5626] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" HandleID="k8s-pod-network.7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Workload="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:36:02.402520 containerd[1601]: 2025-09-12 17:36:02.393 [INFO][5626] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" HandleID="k8s-pod-network.7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Workload="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:36:02.402520 containerd[1601]: 2025-09-12 17:36:02.395 [INFO][5626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:02.402520 containerd[1601]: 2025-09-12 17:36:02.399 [INFO][5614] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:36:02.403199 containerd[1601]: time="2025-09-12T17:36:02.402601272Z" level=info msg="TearDown network for sandbox \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\" successfully" Sep 12 17:36:02.403199 containerd[1601]: time="2025-09-12T17:36:02.402630268Z" level=info msg="StopPodSandbox for \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\" returns successfully" Sep 12 17:36:02.403355 containerd[1601]: time="2025-09-12T17:36:02.403273491Z" level=info msg="RemovePodSandbox for \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\"" Sep 12 17:36:02.403355 containerd[1601]: time="2025-09-12T17:36:02.403320912Z" level=info msg="Forcibly stopping sandbox \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\"" Sep 12 17:36:02.482587 containerd[1601]: time="2025-09-12T17:36:02.482523290Z" level=info msg="StartContainer for \"b9def0ecf643079f28c6d0086561a00d7cedd01ca05daeae5367794d6185e4f2\" returns successfully" Sep 12 17:36:02.502453 containerd[1601]: 2025-09-12 17:36:02.451 [WARNING][5663] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"470162f6-a6a8-4102-8800-2ba86b4652d8", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"893abb7fbe020b7bdbeee9650e2f0bd298aed94e43f85aaf47b27069e0b7f49e", Pod:"coredns-7c65d6cfc9-x5bfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cb0a604231", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:02.502453 containerd[1601]: 2025-09-12 17:36:02.452 [INFO][5663] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:36:02.502453 containerd[1601]: 2025-09-12 17:36:02.452 [INFO][5663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" iface="eth0" netns="" Sep 12 17:36:02.502453 containerd[1601]: 2025-09-12 17:36:02.452 [INFO][5663] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:36:02.502453 containerd[1601]: 2025-09-12 17:36:02.452 [INFO][5663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:36:02.502453 containerd[1601]: 2025-09-12 17:36:02.485 [INFO][5672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" HandleID="k8s-pod-network.7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Workload="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:36:02.502453 containerd[1601]: 2025-09-12 17:36:02.485 [INFO][5672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:02.502453 containerd[1601]: 2025-09-12 17:36:02.485 [INFO][5672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:02.502453 containerd[1601]: 2025-09-12 17:36:02.492 [WARNING][5672] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" HandleID="k8s-pod-network.7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Workload="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:36:02.502453 containerd[1601]: 2025-09-12 17:36:02.492 [INFO][5672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" HandleID="k8s-pod-network.7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Workload="localhost-k8s-coredns--7c65d6cfc9--x5bfz-eth0" Sep 12 17:36:02.502453 containerd[1601]: 2025-09-12 17:36:02.494 [INFO][5672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:02.502453 containerd[1601]: 2025-09-12 17:36:02.497 [INFO][5663] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad" Sep 12 17:36:02.502453 containerd[1601]: time="2025-09-12T17:36:02.501246490Z" level=info msg="TearDown network for sandbox \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\" successfully" Sep 12 17:36:02.506950 containerd[1601]: time="2025-09-12T17:36:02.506878625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:02.507102 containerd[1601]: time="2025-09-12T17:36:02.506973357Z" level=info msg="RemovePodSandbox \"7af863ccd796e55355fd21d088f95c1386507ea6d52fa13520758842ac6662ad\" returns successfully" Sep 12 17:36:02.507693 containerd[1601]: time="2025-09-12T17:36:02.507653561Z" level=info msg="StopPodSandbox for \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\"" Sep 12 17:36:02.587154 containerd[1601]: 2025-09-12 17:36:02.549 [WARNING][5700] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0", GenerateName:"calico-apiserver-689c48fdcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"30910c1c-2d0b-4258-860d-9f0d09a1d5af", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"689c48fdcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a", Pod:"calico-apiserver-689c48fdcf-95pkx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid30283e9589", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:02.587154 containerd[1601]: 2025-09-12 17:36:02.550 [INFO][5700] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:36:02.587154 containerd[1601]: 2025-09-12 17:36:02.550 [INFO][5700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" iface="eth0" netns="" Sep 12 17:36:02.587154 containerd[1601]: 2025-09-12 17:36:02.550 [INFO][5700] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:36:02.587154 containerd[1601]: 2025-09-12 17:36:02.550 [INFO][5700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:36:02.587154 containerd[1601]: 2025-09-12 17:36:02.572 [INFO][5714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" HandleID="k8s-pod-network.88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Workload="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:36:02.587154 containerd[1601]: 2025-09-12 17:36:02.572 [INFO][5714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:02.587154 containerd[1601]: 2025-09-12 17:36:02.572 [INFO][5714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:02.587154 containerd[1601]: 2025-09-12 17:36:02.580 [WARNING][5714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" HandleID="k8s-pod-network.88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Workload="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:36:02.587154 containerd[1601]: 2025-09-12 17:36:02.580 [INFO][5714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" HandleID="k8s-pod-network.88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Workload="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:36:02.587154 containerd[1601]: 2025-09-12 17:36:02.581 [INFO][5714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:02.587154 containerd[1601]: 2025-09-12 17:36:02.584 [INFO][5700] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:36:02.587627 containerd[1601]: time="2025-09-12T17:36:02.587152257Z" level=info msg="TearDown network for sandbox \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\" successfully" Sep 12 17:36:02.587627 containerd[1601]: time="2025-09-12T17:36:02.587186773Z" level=info msg="StopPodSandbox for \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\" returns successfully" Sep 12 17:36:02.588088 containerd[1601]: time="2025-09-12T17:36:02.588024539Z" level=info msg="RemovePodSandbox for \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\"" Sep 12 17:36:02.588551 containerd[1601]: time="2025-09-12T17:36:02.588150420Z" level=info msg="Forcibly stopping sandbox \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\"" Sep 12 17:36:02.676920 containerd[1601]: 2025-09-12 17:36:02.630 [WARNING][5732] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0", GenerateName:"calico-apiserver-689c48fdcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"30910c1c-2d0b-4258-860d-9f0d09a1d5af", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"689c48fdcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9774d557625e75dae197ecb0e76b09b6bbe9c683e242e243a96eded8d5bf867a", Pod:"calico-apiserver-689c48fdcf-95pkx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid30283e9589", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:02.676920 containerd[1601]: 2025-09-12 17:36:02.630 [INFO][5732] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:36:02.676920 containerd[1601]: 2025-09-12 17:36:02.630 [INFO][5732] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" iface="eth0" netns="" Sep 12 17:36:02.676920 containerd[1601]: 2025-09-12 17:36:02.630 [INFO][5732] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:36:02.676920 containerd[1601]: 2025-09-12 17:36:02.630 [INFO][5732] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:36:02.676920 containerd[1601]: 2025-09-12 17:36:02.658 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" HandleID="k8s-pod-network.88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Workload="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:36:02.676920 containerd[1601]: 2025-09-12 17:36:02.658 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:02.676920 containerd[1601]: 2025-09-12 17:36:02.658 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:02.676920 containerd[1601]: 2025-09-12 17:36:02.666 [WARNING][5740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" HandleID="k8s-pod-network.88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Workload="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:36:02.676920 containerd[1601]: 2025-09-12 17:36:02.666 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" HandleID="k8s-pod-network.88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Workload="localhost-k8s-calico--apiserver--689c48fdcf--95pkx-eth0" Sep 12 17:36:02.676920 containerd[1601]: 2025-09-12 17:36:02.669 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:02.676920 containerd[1601]: 2025-09-12 17:36:02.673 [INFO][5732] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801" Sep 12 17:36:02.678454 containerd[1601]: time="2025-09-12T17:36:02.677045658Z" level=info msg="TearDown network for sandbox \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\" successfully" Sep 12 17:36:02.681818 containerd[1601]: time="2025-09-12T17:36:02.681767018Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:02.681888 containerd[1601]: time="2025-09-12T17:36:02.681832704Z" level=info msg="RemovePodSandbox \"88f87b109371f9433ff0a17f37c3f69dd3d7db82e1d1109135c6b229fdb43801\" returns successfully" Sep 12 17:36:02.682573 containerd[1601]: time="2025-09-12T17:36:02.682503640Z" level=info msg="StopPodSandbox for \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\"" Sep 12 17:36:02.734513 kubelet[2724]: I0912 17:36:02.734258 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-689c48fdcf-95pkx" podStartSLOduration=31.630006376 podStartE2EDuration="46.734091786s" podCreationTimestamp="2025-09-12 17:35:16 +0000 UTC" firstStartedPulling="2025-09-12 17:35:47.216510552 +0000 UTC m=+46.897112187" lastFinishedPulling="2025-09-12 17:36:02.320595962 +0000 UTC m=+62.001197597" observedRunningTime="2025-09-12 17:36:02.732495757 +0000 UTC m=+62.413097392" watchObservedRunningTime="2025-09-12 17:36:02.734091786 +0000 UTC m=+62.414693431" Sep 12 17:36:02.737938 kubelet[2724]: I0912 17:36:02.737184 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-qcj2j" podStartSLOduration=35.499467584 podStartE2EDuration="44.737152914s" podCreationTimestamp="2025-09-12 17:35:18 +0000 UTC" firstStartedPulling="2025-09-12 17:35:47.144910872 +0000 UTC m=+46.825512507" lastFinishedPulling="2025-09-12 17:35:56.382596192 +0000 UTC m=+56.063197837" observedRunningTime="2025-09-12 17:35:56.713915197 +0000 UTC m=+56.394516832" watchObservedRunningTime="2025-09-12 17:36:02.737152914 +0000 UTC m=+62.417754559" Sep 12 17:36:02.758464 containerd[1601]: time="2025-09-12T17:36:02.757023162Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:02.758641 containerd[1601]: time="2025-09-12T17:36:02.758553745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" 
Sep 12 17:36:02.760599 containerd[1601]: time="2025-09-12T17:36:02.760553630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 438.796352ms" Sep 12 17:36:02.760599 containerd[1601]: time="2025-09-12T17:36:02.760584999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:36:02.764456 containerd[1601]: time="2025-09-12T17:36:02.764227702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:36:02.767267 containerd[1601]: time="2025-09-12T17:36:02.767167026Z" level=info msg="CreateContainer within sandbox \"16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:36:02.775611 containerd[1601]: 2025-09-12 17:36:02.723 [WARNING][5757] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" WorkloadEndpoint="localhost-k8s-whisker--5c45c567bb--fg9p6-eth0" Sep 12 17:36:02.775611 containerd[1601]: 2025-09-12 17:36:02.723 [INFO][5757] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:36:02.775611 containerd[1601]: 2025-09-12 17:36:02.723 [INFO][5757] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" iface="eth0" netns="" Sep 12 17:36:02.775611 containerd[1601]: 2025-09-12 17:36:02.723 [INFO][5757] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:36:02.775611 containerd[1601]: 2025-09-12 17:36:02.723 [INFO][5757] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:36:02.775611 containerd[1601]: 2025-09-12 17:36:02.755 [INFO][5766] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" HandleID="k8s-pod-network.e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Workload="localhost-k8s-whisker--5c45c567bb--fg9p6-eth0" Sep 12 17:36:02.775611 containerd[1601]: 2025-09-12 17:36:02.755 [INFO][5766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:02.775611 containerd[1601]: 2025-09-12 17:36:02.755 [INFO][5766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:02.775611 containerd[1601]: 2025-09-12 17:36:02.763 [WARNING][5766] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" HandleID="k8s-pod-network.e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Workload="localhost-k8s-whisker--5c45c567bb--fg9p6-eth0" Sep 12 17:36:02.775611 containerd[1601]: 2025-09-12 17:36:02.763 [INFO][5766] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" HandleID="k8s-pod-network.e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Workload="localhost-k8s-whisker--5c45c567bb--fg9p6-eth0" Sep 12 17:36:02.775611 containerd[1601]: 2025-09-12 17:36:02.764 [INFO][5766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:02.775611 containerd[1601]: 2025-09-12 17:36:02.772 [INFO][5757] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:36:02.776281 containerd[1601]: time="2025-09-12T17:36:02.775667089Z" level=info msg="TearDown network for sandbox \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\" successfully" Sep 12 17:36:02.776281 containerd[1601]: time="2025-09-12T17:36:02.775699442Z" level=info msg="StopPodSandbox for \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\" returns successfully" Sep 12 17:36:02.776350 containerd[1601]: time="2025-09-12T17:36:02.776287749Z" level=info msg="RemovePodSandbox for \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\"" Sep 12 17:36:02.776350 containerd[1601]: time="2025-09-12T17:36:02.776318488Z" level=info msg="Forcibly stopping sandbox \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\"" Sep 12 17:36:02.786862 containerd[1601]: time="2025-09-12T17:36:02.786438436Z" level=info msg="CreateContainer within sandbox \"16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1c13331091059a185c87b7285c1e61b3c1b79b7b297fd79a257c566a0307e778\"" Sep 12 17:36:02.790689 containerd[1601]: time="2025-09-12T17:36:02.790457931Z" level=info msg="StartContainer for \"1c13331091059a185c87b7285c1e61b3c1b79b7b297fd79a257c566a0307e778\"" Sep 12 17:36:02.881198 containerd[1601]: 2025-09-12 17:36:02.828 [WARNING][5785] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" WorkloadEndpoint="localhost-k8s-whisker--5c45c567bb--fg9p6-eth0" Sep 12 17:36:02.881198 containerd[1601]: 2025-09-12 17:36:02.829 [INFO][5785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:36:02.881198 containerd[1601]: 2025-09-12 17:36:02.829 [INFO][5785] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" iface="eth0" netns="" Sep 12 17:36:02.881198 containerd[1601]: 2025-09-12 17:36:02.829 [INFO][5785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:36:02.881198 containerd[1601]: 2025-09-12 17:36:02.829 [INFO][5785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:36:02.881198 containerd[1601]: 2025-09-12 17:36:02.865 [INFO][5817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" HandleID="k8s-pod-network.e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Workload="localhost-k8s-whisker--5c45c567bb--fg9p6-eth0" Sep 12 17:36:02.881198 containerd[1601]: 2025-09-12 17:36:02.865 [INFO][5817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:02.881198 containerd[1601]: 2025-09-12 17:36:02.865 [INFO][5817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:02.881198 containerd[1601]: 2025-09-12 17:36:02.872 [WARNING][5817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" HandleID="k8s-pod-network.e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Workload="localhost-k8s-whisker--5c45c567bb--fg9p6-eth0" Sep 12 17:36:02.881198 containerd[1601]: 2025-09-12 17:36:02.872 [INFO][5817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" HandleID="k8s-pod-network.e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Workload="localhost-k8s-whisker--5c45c567bb--fg9p6-eth0" Sep 12 17:36:02.881198 containerd[1601]: 2025-09-12 17:36:02.874 [INFO][5817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:02.881198 containerd[1601]: 2025-09-12 17:36:02.877 [INFO][5785] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b" Sep 12 17:36:02.933046 containerd[1601]: time="2025-09-12T17:36:02.881654800Z" level=info msg="TearDown network for sandbox \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\" successfully" Sep 12 17:36:03.147466 containerd[1601]: time="2025-09-12T17:36:03.147010227Z" level=info msg="StartContainer for \"1c13331091059a185c87b7285c1e61b3c1b79b7b297fd79a257c566a0307e778\" returns successfully" Sep 12 17:36:03.151029 containerd[1601]: time="2025-09-12T17:36:03.150870573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:03.151029 containerd[1601]: time="2025-09-12T17:36:03.150926329Z" level=info msg="RemovePodSandbox \"e7427c026c4fae9d1ea4276871427eb85fe17a6a5ce556770456aba985b7ae8b\" returns successfully" Sep 12 17:36:03.151594 containerd[1601]: time="2025-09-12T17:36:03.151366032Z" level=info msg="StopPodSandbox for \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\"" Sep 12 17:36:03.190632 systemd[1]: Started sshd@11-10.0.0.72:22-10.0.0.1:33070.service - OpenSSH per-connection server daemon (10.0.0.1:33070). 
Sep 12 17:36:03.249345 containerd[1601]: 2025-09-12 17:36:03.201 [WARNING][5850] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--qcj2j-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6d962edc-2c93-4f13-a932-db0a9095910d", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0", Pod:"goldmane-7988f88666-qcj2j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1004327243", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:03.249345 containerd[1601]: 2025-09-12 17:36:03.201 [INFO][5850] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:36:03.249345 containerd[1601]: 2025-09-12 17:36:03.202 [INFO][5850] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" iface="eth0" netns="" Sep 12 17:36:03.249345 containerd[1601]: 2025-09-12 17:36:03.202 [INFO][5850] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:36:03.249345 containerd[1601]: 2025-09-12 17:36:03.202 [INFO][5850] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:36:03.249345 containerd[1601]: 2025-09-12 17:36:03.228 [INFO][5859] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" HandleID="k8s-pod-network.3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Workload="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:36:03.249345 containerd[1601]: 2025-09-12 17:36:03.229 [INFO][5859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:03.249345 containerd[1601]: 2025-09-12 17:36:03.229 [INFO][5859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:03.249345 containerd[1601]: 2025-09-12 17:36:03.238 [WARNING][5859] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" HandleID="k8s-pod-network.3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Workload="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:36:03.249345 containerd[1601]: 2025-09-12 17:36:03.239 [INFO][5859] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" HandleID="k8s-pod-network.3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Workload="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:36:03.249345 containerd[1601]: 2025-09-12 17:36:03.241 [INFO][5859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:03.249345 containerd[1601]: 2025-09-12 17:36:03.246 [INFO][5850] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:36:03.251051 containerd[1601]: time="2025-09-12T17:36:03.249385004Z" level=info msg="TearDown network for sandbox \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\" successfully" Sep 12 17:36:03.251051 containerd[1601]: time="2025-09-12T17:36:03.249445219Z" level=info msg="StopPodSandbox for \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\" returns successfully" Sep 12 17:36:03.251673 containerd[1601]: time="2025-09-12T17:36:03.251324550Z" level=info msg="RemovePodSandbox for \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\"" Sep 12 17:36:03.251673 containerd[1601]: time="2025-09-12T17:36:03.251363123Z" level=info msg="Forcibly stopping sandbox \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\"" Sep 12 17:36:03.255026 sshd[5856]: Accepted publickey for core from 10.0.0.1 port 33070 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:03.256916 sshd[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:03.265031 systemd-logind[1577]: New session 12 of user core. Sep 12 17:36:03.270894 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:36:03.355860 containerd[1601]: 2025-09-12 17:36:03.291 [WARNING][5878] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--qcj2j-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6d962edc-2c93-4f13-a932-db0a9095910d", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1714ec4ef1b8c074413274d939e8b0ad5f8d029a47d7b735fb0546e94bdbaa0", Pod:"goldmane-7988f88666-qcj2j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1004327243", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:03.355860 containerd[1601]: 2025-09-12 17:36:03.291 [INFO][5878] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:36:03.355860 containerd[1601]: 2025-09-12 17:36:03.291 [INFO][5878] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" iface="eth0" netns="" Sep 12 17:36:03.355860 containerd[1601]: 2025-09-12 17:36:03.291 [INFO][5878] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:36:03.355860 containerd[1601]: 2025-09-12 17:36:03.291 [INFO][5878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:36:03.355860 containerd[1601]: 2025-09-12 17:36:03.324 [INFO][5888] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" HandleID="k8s-pod-network.3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Workload="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:36:03.355860 containerd[1601]: 2025-09-12 17:36:03.324 [INFO][5888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:03.355860 containerd[1601]: 2025-09-12 17:36:03.324 [INFO][5888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:03.355860 containerd[1601]: 2025-09-12 17:36:03.336 [WARNING][5888] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" HandleID="k8s-pod-network.3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Workload="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:36:03.355860 containerd[1601]: 2025-09-12 17:36:03.336 [INFO][5888] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" HandleID="k8s-pod-network.3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Workload="localhost-k8s-goldmane--7988f88666--qcj2j-eth0" Sep 12 17:36:03.355860 containerd[1601]: 2025-09-12 17:36:03.340 [INFO][5888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:03.355860 containerd[1601]: 2025-09-12 17:36:03.347 [INFO][5878] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614" Sep 12 17:36:03.356449 containerd[1601]: time="2025-09-12T17:36:03.356129912Z" level=info msg="TearDown network for sandbox \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\" successfully" Sep 12 17:36:03.364635 systemd-journald[1163]: Under memory pressure, flushing caches. Sep 12 17:36:03.362483 systemd-resolved[1480]: Under memory pressure, flushing caches. Sep 12 17:36:03.362512 systemd-resolved[1480]: Flushed all caches. Sep 12 17:36:03.369328 containerd[1601]: time="2025-09-12T17:36:03.369232544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:03.369518 containerd[1601]: time="2025-09-12T17:36:03.369357714Z" level=info msg="RemovePodSandbox \"3c7b356b09da4514174b94434d7c2ed5ab3a4511b5c4345461b8fd76a5054614\" returns successfully" Sep 12 17:36:03.370502 containerd[1601]: time="2025-09-12T17:36:03.370466278Z" level=info msg="StopPodSandbox for \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\"" Sep 12 17:36:03.732190 kubelet[2724]: I0912 17:36:03.732152 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:36:03.759134 containerd[1601]: 2025-09-12 17:36:03.483 [WARNING][5914] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0", GenerateName:"calico-apiserver-689c48fdcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7b2ca51-b8fe-48dd-93cf-b98746e57dea", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"689c48fdcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499", Pod:"calico-apiserver-689c48fdcf-qf7qr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83992928d1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:03.759134 containerd[1601]: 2025-09-12 17:36:03.484 [INFO][5914] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:36:03.759134 containerd[1601]: 2025-09-12 17:36:03.484 [INFO][5914] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" iface="eth0" netns="" Sep 12 17:36:03.759134 containerd[1601]: 2025-09-12 17:36:03.484 [INFO][5914] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:36:03.759134 containerd[1601]: 2025-09-12 17:36:03.484 [INFO][5914] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:36:03.759134 containerd[1601]: 2025-09-12 17:36:03.513 [INFO][5940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" HandleID="k8s-pod-network.06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Workload="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:36:03.759134 containerd[1601]: 2025-09-12 17:36:03.513 [INFO][5940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:03.759134 containerd[1601]: 2025-09-12 17:36:03.513 [INFO][5940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:03.759134 containerd[1601]: 2025-09-12 17:36:03.750 [WARNING][5940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" HandleID="k8s-pod-network.06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Workload="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:36:03.759134 containerd[1601]: 2025-09-12 17:36:03.750 [INFO][5940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" HandleID="k8s-pod-network.06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Workload="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:36:03.759134 containerd[1601]: 2025-09-12 17:36:03.752 [INFO][5940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:03.759134 containerd[1601]: 2025-09-12 17:36:03.755 [INFO][5914] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:36:03.759979 containerd[1601]: time="2025-09-12T17:36:03.759190112Z" level=info msg="TearDown network for sandbox \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\" successfully" Sep 12 17:36:03.759979 containerd[1601]: time="2025-09-12T17:36:03.759222986Z" level=info msg="StopPodSandbox for \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\" returns successfully" Sep 12 17:36:03.759979 containerd[1601]: time="2025-09-12T17:36:03.759715659Z" level=info msg="RemovePodSandbox for \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\"" Sep 12 17:36:03.759979 containerd[1601]: time="2025-09-12T17:36:03.759747210Z" level=info msg="Forcibly stopping sandbox \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\"" Sep 12 17:36:03.781211 sshd[5856]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:03.790673 systemd[1]: Started sshd@12-10.0.0.72:22-10.0.0.1:33078.service - OpenSSH per-connection server daemon (10.0.0.1:33078). Sep 12 17:36:03.793460 systemd[1]: sshd@11-10.0.0.72:22-10.0.0.1:33070.service: Deactivated successfully. Sep 12 17:36:03.799576 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:36:03.800959 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:36:03.808099 systemd-logind[1577]: Removed session 12. Sep 12 17:36:03.827235 sshd[5970]: Accepted publickey for core from 10.0.0.1 port 33078 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:03.829133 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:03.833834 systemd-logind[1577]: New session 13 of user core. Sep 12 17:36:03.845671 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 12 17:36:04.198350 kubelet[2724]: I0912 17:36:04.197492 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-689c48fdcf-qf7qr" podStartSLOduration=36.093009165 podStartE2EDuration="48.197392455s" podCreationTimestamp="2025-09-12 17:35:16 +0000 UTC" firstStartedPulling="2025-09-12 17:35:50.657288161 +0000 UTC m=+50.337889797" lastFinishedPulling="2025-09-12 17:36:02.761671462 +0000 UTC m=+62.442273087" observedRunningTime="2025-09-12 17:36:04.197145282 +0000 UTC m=+63.877746937" watchObservedRunningTime="2025-09-12 17:36:04.197392455 +0000 UTC m=+63.877994100" Sep 12 17:36:04.217773 sshd[5970]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:04.237149 systemd[1]: Started sshd@13-10.0.0.72:22-10.0.0.1:33082.service - OpenSSH per-connection server daemon (10.0.0.1:33082). Sep 12 17:36:04.240188 systemd[1]: sshd@12-10.0.0.72:22-10.0.0.1:33078.service: Deactivated successfully. Sep 12 17:36:04.247695 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:36:04.262596 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:36:04.268522 systemd-logind[1577]: Removed session 13. Sep 12 17:36:04.318459 sshd[5995]: Accepted publickey for core from 10.0.0.1 port 33082 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:04.320232 containerd[1601]: 2025-09-12 17:36:04.187 [WARNING][5964] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0", GenerateName:"calico-apiserver-689c48fdcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7b2ca51-b8fe-48dd-93cf-b98746e57dea", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"689c48fdcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16f44d98e15b12c316dbd4deec327dfca92ceb4e5cb8dbd713185bc62b685499", Pod:"calico-apiserver-689c48fdcf-qf7qr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83992928d1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:04.320232 containerd[1601]: 2025-09-12 17:36:04.189 [INFO][5964] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:36:04.320232 containerd[1601]: 2025-09-12 17:36:04.189 [INFO][5964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" iface="eth0" netns="" Sep 12 17:36:04.320232 containerd[1601]: 2025-09-12 17:36:04.189 [INFO][5964] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:36:04.320232 containerd[1601]: 2025-09-12 17:36:04.189 [INFO][5964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:36:04.320232 containerd[1601]: 2025-09-12 17:36:04.292 [INFO][5988] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" HandleID="k8s-pod-network.06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Workload="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:36:04.320232 containerd[1601]: 2025-09-12 17:36:04.292 [INFO][5988] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:04.320232 containerd[1601]: 2025-09-12 17:36:04.292 [INFO][5988] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:04.320232 containerd[1601]: 2025-09-12 17:36:04.299 [WARNING][5988] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" HandleID="k8s-pod-network.06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Workload="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:36:04.320232 containerd[1601]: 2025-09-12 17:36:04.300 [INFO][5988] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" HandleID="k8s-pod-network.06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Workload="localhost-k8s-calico--apiserver--689c48fdcf--qf7qr-eth0" Sep 12 17:36:04.320232 containerd[1601]: 2025-09-12 17:36:04.306 [INFO][5988] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:04.320232 containerd[1601]: 2025-09-12 17:36:04.315 [INFO][5964] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0" Sep 12 17:36:04.321980 containerd[1601]: time="2025-09-12T17:36:04.320296895Z" level=info msg="TearDown network for sandbox \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\" successfully" Sep 12 17:36:04.321541 sshd[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:04.326555 containerd[1601]: time="2025-09-12T17:36:04.326505382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:04.326613 containerd[1601]: time="2025-09-12T17:36:04.326576979Z" level=info msg="RemovePodSandbox \"06c47222ada664aaa0370a91b736026294f5477958f55cd163f35aac26cff9d0\" returns successfully" Sep 12 17:36:04.327194 containerd[1601]: time="2025-09-12T17:36:04.327127995Z" level=info msg="StopPodSandbox for \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\"" Sep 12 17:36:04.328107 systemd-logind[1577]: New session 14 of user core. Sep 12 17:36:04.337195 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 12 17:36:04.442010 containerd[1601]: 2025-09-12 17:36:04.383 [WARNING][6013] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pg5rh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45ffb3eb-a3d1-424f-9934-ed6fe54575da", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40", Pod:"csi-node-driver-pg5rh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3079316426f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:04.442010 containerd[1601]: 2025-09-12 17:36:04.383 [INFO][6013] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:36:04.442010 containerd[1601]: 2025-09-12 17:36:04.383 [INFO][6013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" iface="eth0" netns="" Sep 12 17:36:04.442010 containerd[1601]: 2025-09-12 17:36:04.383 [INFO][6013] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:36:04.442010 containerd[1601]: 2025-09-12 17:36:04.383 [INFO][6013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:36:04.442010 containerd[1601]: 2025-09-12 17:36:04.423 [INFO][6023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" HandleID="k8s-pod-network.f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Workload="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:36:04.442010 containerd[1601]: 2025-09-12 17:36:04.424 [INFO][6023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:04.442010 containerd[1601]: 2025-09-12 17:36:04.424 [INFO][6023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:04.442010 containerd[1601]: 2025-09-12 17:36:04.431 [WARNING][6023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" HandleID="k8s-pod-network.f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Workload="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:36:04.442010 containerd[1601]: 2025-09-12 17:36:04.431 [INFO][6023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" HandleID="k8s-pod-network.f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Workload="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:36:04.442010 containerd[1601]: 2025-09-12 17:36:04.433 [INFO][6023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:04.442010 containerd[1601]: 2025-09-12 17:36:04.436 [INFO][6013] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:36:04.443041 containerd[1601]: time="2025-09-12T17:36:04.442044118Z" level=info msg="TearDown network for sandbox \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\" successfully" Sep 12 17:36:04.443041 containerd[1601]: time="2025-09-12T17:36:04.442072031Z" level=info msg="StopPodSandbox for \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\" returns successfully" Sep 12 17:36:04.443129 containerd[1601]: time="2025-09-12T17:36:04.443045535Z" level=info msg="RemovePodSandbox for \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\"" Sep 12 17:36:04.443129 containerd[1601]: time="2025-09-12T17:36:04.443069661Z" level=info msg="Forcibly stopping sandbox \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\"" Sep 12 17:36:04.543002 sshd[5995]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:04.552771 systemd[1]: sshd@13-10.0.0.72:22-10.0.0.1:33082.service: Deactivated successfully. Sep 12 17:36:04.557390 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:36:04.557731 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:36:04.561191 systemd-logind[1577]: Removed session 14. Sep 12 17:36:04.601868 containerd[1601]: 2025-09-12 17:36:04.516 [WARNING][6047] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pg5rh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45ffb3eb-a3d1-424f-9934-ed6fe54575da", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40", Pod:"csi-node-driver-pg5rh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3079316426f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:04.601868 containerd[1601]: 2025-09-12 17:36:04.516 [INFO][6047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:36:04.601868 containerd[1601]: 2025-09-12 17:36:04.516 [INFO][6047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" iface="eth0" netns="" Sep 12 17:36:04.601868 containerd[1601]: 2025-09-12 17:36:04.516 [INFO][6047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:36:04.601868 containerd[1601]: 2025-09-12 17:36:04.516 [INFO][6047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:36:04.601868 containerd[1601]: 2025-09-12 17:36:04.575 [INFO][6055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" HandleID="k8s-pod-network.f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Workload="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:36:04.601868 containerd[1601]: 2025-09-12 17:36:04.576 [INFO][6055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:04.601868 containerd[1601]: 2025-09-12 17:36:04.577 [INFO][6055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:04.601868 containerd[1601]: 2025-09-12 17:36:04.592 [WARNING][6055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" HandleID="k8s-pod-network.f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Workload="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:36:04.601868 containerd[1601]: 2025-09-12 17:36:04.592 [INFO][6055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" HandleID="k8s-pod-network.f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Workload="localhost-k8s-csi--node--driver--pg5rh-eth0" Sep 12 17:36:04.601868 containerd[1601]: 2025-09-12 17:36:04.594 [INFO][6055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:04.601868 containerd[1601]: 2025-09-12 17:36:04.598 [INFO][6047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176" Sep 12 17:36:04.602372 containerd[1601]: time="2025-09-12T17:36:04.601933222Z" level=info msg="TearDown network for sandbox \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\" successfully" Sep 12 17:36:04.644607 containerd[1601]: time="2025-09-12T17:36:04.644555332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:04.644760 containerd[1601]: time="2025-09-12T17:36:04.644637560Z" level=info msg="RemovePodSandbox \"f5e74d7db8a834b3e89ad6661988fce9646c4b9b471373b2e4e3bf37e5e9b176\" returns successfully" Sep 12 17:36:04.735549 kubelet[2724]: I0912 17:36:04.735513 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:36:06.641780 containerd[1601]: time="2025-09-12T17:36:06.641710629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:06.642809 containerd[1601]: time="2025-09-12T17:36:06.642723016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:36:06.644036 containerd[1601]: time="2025-09-12T17:36:06.644008866Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:06.646528 containerd[1601]: time="2025-09-12T17:36:06.646468090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:06.647102 containerd[1601]: time="2025-09-12T17:36:06.647073628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.882789708s" Sep 12 17:36:06.647172 containerd[1601]: time="2025-09-12T17:36:06.647105999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 
17:36:06.648528 containerd[1601]: time="2025-09-12T17:36:06.648499906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:36:06.658677 containerd[1601]: time="2025-09-12T17:36:06.658614592Z" level=info msg="CreateContainer within sandbox \"01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:36:06.687447 containerd[1601]: time="2025-09-12T17:36:06.687351630Z" level=info msg="CreateContainer within sandbox \"01cefae7c3ef622e18e5cb0bb9fe4378299ff014d85087adc572da9567d4ed4d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d8a3db970c96259a2c08abcd59f4abf4afebd1b19b23783748fc9d3fb31b9806\"" Sep 12 17:36:06.688044 containerd[1601]: time="2025-09-12T17:36:06.688002515Z" level=info msg="StartContainer for \"d8a3db970c96259a2c08abcd59f4abf4afebd1b19b23783748fc9d3fb31b9806\"" Sep 12 17:36:06.864300 containerd[1601]: time="2025-09-12T17:36:06.863963325Z" level=info msg="StartContainer for \"d8a3db970c96259a2c08abcd59f4abf4afebd1b19b23783748fc9d3fb31b9806\" returns successfully" Sep 12 17:36:07.860661 kubelet[2724]: I0912 17:36:07.860355 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65ddc98f95-bgcp8" podStartSLOduration=33.600353371 podStartE2EDuration="48.86032882s" podCreationTimestamp="2025-09-12 17:35:19 +0000 UTC" firstStartedPulling="2025-09-12 17:35:51.388020613 +0000 UTC m=+51.068622258" lastFinishedPulling="2025-09-12 17:36:06.647996072 +0000 UTC m=+66.328597707" observedRunningTime="2025-09-12 17:36:07.859692223 +0000 UTC m=+67.540293858" watchObservedRunningTime="2025-09-12 17:36:07.86032882 +0000 UTC m=+67.540930455" Sep 12 17:36:09.554732 systemd[1]: Started sshd@14-10.0.0.72:22-10.0.0.1:33090.service - OpenSSH per-connection server daemon (10.0.0.1:33090). Sep 12 17:36:10.086296 sshd[6140]: Accepted publickey for core from 10.0.0.1 port 33090 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:10.088745 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:10.094326 systemd-logind[1577]: New session 15 of user core. Sep 12 17:36:10.098688 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:36:10.650829 sshd[6140]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:10.655125 systemd[1]: sshd@14-10.0.0.72:22-10.0.0.1:33090.service: Deactivated successfully. Sep 12 17:36:10.658064 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:36:10.659020 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:36:10.660094 systemd-logind[1577]: Removed session 15. 
Sep 12 17:36:12.967338 containerd[1601]: time="2025-09-12T17:36:12.967260815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:13.014531 containerd[1601]: time="2025-09-12T17:36:13.014482572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:36:13.063926 containerd[1601]: time="2025-09-12T17:36:13.063874569Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:13.151492 containerd[1601]: time="2025-09-12T17:36:13.151423844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:13.152104 containerd[1601]: time="2025-09-12T17:36:13.152071398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 6.503539171s" Sep 12 17:36:13.152104 containerd[1601]: time="2025-09-12T17:36:13.152102297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:36:13.153645 containerd[1601]: time="2025-09-12T17:36:13.153351339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:36:13.158462 containerd[1601]: time="2025-09-12T17:36:13.155865944Z" level=info msg="CreateContainer within sandbox \"d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:36:14.114639 containerd[1601]: time="2025-09-12T17:36:14.114574243Z" level=info msg="CreateContainer within sandbox \"d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bef0299c24f5d8451bba844904c1df7ea22007d3d43a597c2ca6ce57ca20be70\"" Sep 12 17:36:14.115249 containerd[1601]: time="2025-09-12T17:36:14.115209213Z" level=info msg="StartContainer for \"bef0299c24f5d8451bba844904c1df7ea22007d3d43a597c2ca6ce57ca20be70\"" Sep 12 17:36:14.227949 containerd[1601]: time="2025-09-12T17:36:14.227891045Z" level=info msg="StartContainer for \"bef0299c24f5d8451bba844904c1df7ea22007d3d43a597c2ca6ce57ca20be70\" returns successfully" Sep 12 17:36:15.421021 kubelet[2724]: E0912 17:36:15.420971 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:15.663821 systemd[1]: Started sshd@15-10.0.0.72:22-10.0.0.1:59056.service - OpenSSH per-connection server daemon (10.0.0.1:59056). Sep 12 17:36:15.698918 sshd[6209]: Accepted publickey for core from 10.0.0.1 port 59056 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:15.701244 sshd[6209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:15.706307 systemd-logind[1577]: New session 16 of user core. 
Sep 12 17:36:15.717822 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:36:15.902743 sshd[6209]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:15.909971 systemd[1]: sshd@15-10.0.0.72:22-10.0.0.1:59056.service: Deactivated successfully. Sep 12 17:36:15.914131 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:36:15.914460 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:36:15.917632 systemd-logind[1577]: Removed session 16. Sep 12 17:36:17.590536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3779151527.mount: Deactivated successfully. Sep 12 17:36:18.767998 kubelet[2724]: I0912 17:36:18.767933 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:36:19.200579 containerd[1601]: time="2025-09-12T17:36:19.200137441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:19.247780 containerd[1601]: time="2025-09-12T17:36:19.247693629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:36:19.304403 containerd[1601]: time="2025-09-12T17:36:19.304292663Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:19.386782 containerd[1601]: time="2025-09-12T17:36:19.386718081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:19.387666 containerd[1601]: time="2025-09-12T17:36:19.387617684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 6.234219174s" Sep 12 17:36:19.387754 containerd[1601]: time="2025-09-12T17:36:19.387674511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:36:19.389215 containerd[1601]: time="2025-09-12T17:36:19.389007447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:36:19.390603 containerd[1601]: time="2025-09-12T17:36:19.390578286Z" level=info msg="CreateContainer within sandbox \"beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:36:19.443669 systemd-journald[1163]: Under memory pressure, flushing caches. Sep 12 17:36:19.426971 systemd-resolved[1480]: Under memory pressure, flushing caches. Sep 12 17:36:19.427009 systemd-resolved[1480]: Flushed all caches. Sep 12 17:36:19.851625 systemd[1]: run-containerd-runc-k8s.io-7df7079d766849d3d88fdb70ad3236913c5cd0382f59f76d89624b97dfb56ab9-runc.UWR8Gp.mount: Deactivated successfully. 
Sep 12 17:36:20.890629 containerd[1601]: time="2025-09-12T17:36:20.890563794Z" level=info msg="CreateContainer within sandbox \"beca6d537695d04890dcdacbb502a0a7fb922201845c0325617f264c54aeba88\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4d53d2c75686c810ba06a678677fa8d1d40e62abfb04ffff431b8ad4cb7bf99c\"" Sep 12 17:36:20.891554 containerd[1601]: time="2025-09-12T17:36:20.891436253Z" level=info msg="StartContainer for \"4d53d2c75686c810ba06a678677fa8d1d40e62abfb04ffff431b8ad4cb7bf99c\"" Sep 12 17:36:20.963007 systemd[1]: Started sshd@16-10.0.0.72:22-10.0.0.1:33662.service - OpenSSH per-connection server daemon (10.0.0.1:33662). Sep 12 17:36:21.025513 sshd[6279]: Accepted publickey for core from 10.0.0.1 port 33662 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:21.027686 sshd[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:21.034325 systemd-logind[1577]: New session 17 of user core. Sep 12 17:36:21.040925 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:36:21.213382 containerd[1601]: time="2025-09-12T17:36:21.213233924Z" level=info msg="StartContainer for \"4d53d2c75686c810ba06a678677fa8d1d40e62abfb04ffff431b8ad4cb7bf99c\" returns successfully" Sep 12 17:36:21.474535 systemd-resolved[1480]: Under memory pressure, flushing caches. Sep 12 17:36:21.474557 systemd-resolved[1480]: Flushed all caches. Sep 12 17:36:21.476489 systemd-journald[1163]: Under memory pressure, flushing caches. Sep 12 17:36:21.560821 sshd[6279]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:21.565860 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:36:21.566124 systemd[1]: sshd@16-10.0.0.72:22-10.0.0.1:33662.service: Deactivated successfully. Sep 12 17:36:21.568940 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:36:21.569826 systemd-logind[1577]: Removed session 17. 
Sep 12 17:36:21.988196 kubelet[2724]: I0912 17:36:21.987447 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-596795f569-zlqt9" podStartSLOduration=4.174340696 podStartE2EDuration="36.987428235s" podCreationTimestamp="2025-09-12 17:35:45 +0000 UTC" firstStartedPulling="2025-09-12 17:35:46.575735227 +0000 UTC m=+46.256336862" lastFinishedPulling="2025-09-12 17:36:19.388822766 +0000 UTC m=+79.069424401" observedRunningTime="2025-09-12 17:36:21.987045598 +0000 UTC m=+81.667647243" watchObservedRunningTime="2025-09-12 17:36:21.987428235 +0000 UTC m=+81.668029860" Sep 12 17:36:23.806323 containerd[1601]: time="2025-09-12T17:36:23.806270576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:23.808098 containerd[1601]: time="2025-09-12T17:36:23.807985394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:36:23.811974 containerd[1601]: time="2025-09-12T17:36:23.811917707Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:23.814973 containerd[1601]: time="2025-09-12T17:36:23.814924630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:23.815579 containerd[1601]: time="2025-09-12T17:36:23.815535070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 4.426471106s" Sep 12 17:36:23.815645 containerd[1601]: time="2025-09-12T17:36:23.815580506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:36:23.818147 containerd[1601]: time="2025-09-12T17:36:23.817970929Z" level=info msg="CreateContainer within sandbox \"d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:36:23.849253 containerd[1601]: time="2025-09-12T17:36:23.849200771Z" level=info msg="CreateContainer within sandbox \"d0aaeec58b9c6a587cc00283942b41a3cb13ec14f87aaae0dbb1b3ed37f53d40\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"07d86788c96c4eb73521ff0dd5830145938b62c41cfb10b051a4b564298da9fc\"" Sep 12 17:36:23.849928 containerd[1601]: time="2025-09-12T17:36:23.849898548Z" level=info msg="StartContainer for \"07d86788c96c4eb73521ff0dd5830145938b62c41cfb10b051a4b564298da9fc\"" Sep 12 17:36:23.917349 containerd[1601]: time="2025-09-12T17:36:23.917291646Z" level=info msg="StartContainer for \"07d86788c96c4eb73521ff0dd5830145938b62c41cfb10b051a4b564298da9fc\" returns successfully" Sep 12 17:36:24.421434 kubelet[2724]: E0912 17:36:24.421364 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:24.717196 kubelet[2724]: I0912 17:36:24.717063 2724 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:36:24.721274 kubelet[2724]: I0912 17:36:24.721242 2724 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:36:25.421443 kubelet[2724]: E0912 17:36:25.421378 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:26.576655 systemd[1]: Started sshd@17-10.0.0.72:22-10.0.0.1:33668.service - OpenSSH per-connection server daemon (10.0.0.1:33668). Sep 12 17:36:26.625458 sshd[6374]: Accepted publickey for core from 10.0.0.1 port 33668 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:26.627603 sshd[6374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:26.633006 systemd-logind[1577]: New session 18 of user core. Sep 12 17:36:26.639860 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:36:26.895690 sshd[6374]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:26.901067 systemd[1]: sshd@17-10.0.0.72:22-10.0.0.1:33668.service: Deactivated successfully. Sep 12 17:36:26.903974 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:36:26.904831 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:36:26.905847 systemd-logind[1577]: Removed session 18. Sep 12 17:36:31.142225 kubelet[2724]: I0912 17:36:31.142170 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:36:31.291141 kubelet[2724]: I0912 17:36:31.290613 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pg5rh" podStartSLOduration=40.204200355 podStartE2EDuration="1m12.290574848s" podCreationTimestamp="2025-09-12 17:35:19 +0000 UTC" firstStartedPulling="2025-09-12 17:35:51.729899291 +0000 UTC m=+51.410500916" lastFinishedPulling="2025-09-12 17:36:23.816273784 +0000 UTC m=+83.496875409" observedRunningTime="2025-09-12 17:36:24.970310859 +0000 UTC m=+84.650912494" watchObservedRunningTime="2025-09-12 17:36:31.290574848 +0000 UTC m=+90.971176483" Sep 12 17:36:31.921850 systemd[1]: Started sshd@18-10.0.0.72:22-10.0.0.1:55576.service - OpenSSH per-connection server daemon (10.0.0.1:55576). Sep 12 17:36:31.948803 sshd[6401]: Accepted publickey for core from 10.0.0.1 port 55576 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:31.950683 sshd[6401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:31.959582 systemd-logind[1577]: New session 19 of user core. Sep 12 17:36:31.970711 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:36:32.127673 sshd[6401]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:32.133020 systemd[1]: sshd@18-10.0.0.72:22-10.0.0.1:55576.service: Deactivated successfully. Sep 12 17:36:32.136164 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:36:32.136280 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:36:32.137535 systemd-logind[1577]: Removed session 19. 
Sep 12 17:36:37.139722 systemd[1]: Started sshd@19-10.0.0.72:22-10.0.0.1:55590.service - OpenSSH per-connection server daemon (10.0.0.1:55590). Sep 12 17:36:37.168264 sshd[6458]: Accepted publickey for core from 10.0.0.1 port 55590 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:37.171134 sshd[6458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:37.175580 systemd-logind[1577]: New session 20 of user core. Sep 12 17:36:37.182726 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:36:37.343705 sshd[6458]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:37.355668 systemd[1]: Started sshd@20-10.0.0.72:22-10.0.0.1:55602.service - OpenSSH per-connection server daemon (10.0.0.1:55602). Sep 12 17:36:37.356185 systemd[1]: sshd@19-10.0.0.72:22-10.0.0.1:55590.service: Deactivated successfully. Sep 12 17:36:37.360165 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:36:37.361273 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:36:37.362890 systemd-logind[1577]: Removed session 20. Sep 12 17:36:37.397357 sshd[6471]: Accepted publickey for core from 10.0.0.1 port 55602 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:37.399549 sshd[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:37.404782 systemd-logind[1577]: New session 21 of user core. Sep 12 17:36:37.414736 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:36:37.829309 sshd[6471]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:37.836664 systemd[1]: Started sshd@21-10.0.0.72:22-10.0.0.1:55616.service - OpenSSH per-connection server daemon (10.0.0.1:55616). Sep 12 17:36:37.837314 systemd[1]: sshd@20-10.0.0.72:22-10.0.0.1:55602.service: Deactivated successfully. Sep 12 17:36:37.841316 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:36:37.841796 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:36:37.843735 systemd-logind[1577]: Removed session 21. Sep 12 17:36:37.873592 sshd[6484]: Accepted publickey for core from 10.0.0.1 port 55616 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:37.875455 sshd[6484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:37.880040 systemd-logind[1577]: New session 22 of user core. Sep 12 17:36:37.893711 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:36:40.203158 sshd[6484]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:40.212907 systemd[1]: Started sshd@22-10.0.0.72:22-10.0.0.1:47356.service - OpenSSH per-connection server daemon (10.0.0.1:47356). Sep 12 17:36:40.213658 systemd[1]: sshd@21-10.0.0.72:22-10.0.0.1:55616.service: Deactivated successfully. Sep 12 17:36:40.219134 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:36:40.219532 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:36:40.224789 systemd-logind[1577]: Removed session 22. Sep 12 17:36:40.275122 sshd[6527]: Accepted publickey for core from 10.0.0.1 port 47356 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:40.276845 sshd[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:40.281530 systemd-logind[1577]: New session 23 of user core. 
Sep 12 17:36:40.292702 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:36:40.806339 sshd[6527]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:40.817782 systemd[1]: Started sshd@23-10.0.0.72:22-10.0.0.1:47370.service - OpenSSH per-connection server daemon (10.0.0.1:47370). Sep 12 17:36:40.818541 systemd[1]: sshd@22-10.0.0.72:22-10.0.0.1:47356.service: Deactivated successfully. Sep 12 17:36:40.824283 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:36:40.825430 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:36:40.827002 systemd-logind[1577]: Removed session 23. Sep 12 17:36:40.847256 sshd[6541]: Accepted publickey for core from 10.0.0.1 port 47370 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:40.849373 sshd[6541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:40.855125 systemd-logind[1577]: New session 24 of user core. Sep 12 17:36:40.864790 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:36:40.992590 sshd[6541]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:40.998756 systemd[1]: sshd@23-10.0.0.72:22-10.0.0.1:47370.service: Deactivated successfully. Sep 12 17:36:41.001884 systemd-logind[1577]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:36:41.002052 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:36:41.003577 systemd-logind[1577]: Removed session 24. Sep 12 17:36:41.421211 kubelet[2724]: E0912 17:36:41.421152 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:46.003820 systemd[1]: Started sshd@24-10.0.0.72:22-10.0.0.1:47386.service - OpenSSH per-connection server daemon (10.0.0.1:47386). Sep 12 17:36:46.033034 sshd[6562]: Accepted publickey for core from 10.0.0.1 port 47386 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:46.035080 sshd[6562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:46.039830 systemd-logind[1577]: New session 25 of user core. Sep 12 17:36:46.046890 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:36:46.159096 sshd[6562]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:46.164074 systemd[1]: sshd@24-10.0.0.72:22-10.0.0.1:47386.service: Deactivated successfully. Sep 12 17:36:46.167301 systemd-logind[1577]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:36:46.167375 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:36:46.168607 systemd-logind[1577]: Removed session 25. Sep 12 17:36:51.175718 systemd[1]: Started sshd@25-10.0.0.72:22-10.0.0.1:48214.service - OpenSSH per-connection server daemon (10.0.0.1:48214). Sep 12 17:36:51.227605 sshd[6600]: Accepted publickey for core from 10.0.0.1 port 48214 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:51.229865 sshd[6600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:51.234803 systemd-logind[1577]: New session 26 of user core. Sep 12 17:36:51.244704 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:36:51.408008 sshd[6600]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:51.412855 systemd[1]: sshd@25-10.0.0.72:22-10.0.0.1:48214.service: Deactivated successfully. 
Sep 12 17:36:51.415627 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:36:51.416437 systemd-logind[1577]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:36:51.417372 systemd-logind[1577]: Removed session 26. Sep 12 17:36:56.422787 systemd[1]: Started sshd@26-10.0.0.72:22-10.0.0.1:48226.service - OpenSSH per-connection server daemon (10.0.0.1:48226). Sep 12 17:36:56.467441 sshd[6616]: Accepted publickey for core from 10.0.0.1 port 48226 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:36:56.469099 sshd[6616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:56.474050 systemd-logind[1577]: New session 27 of user core. Sep 12 17:36:56.481890 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 17:36:56.877990 sshd[6616]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:56.882887 systemd[1]: sshd@26-10.0.0.72:22-10.0.0.1:48226.service: Deactivated successfully. Sep 12 17:36:56.887339 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 17:36:56.890712 systemd-logind[1577]: Session 27 logged out. Waiting for processes to exit. Sep 12 17:36:56.892266 systemd-logind[1577]: Removed session 27. Sep 12 17:37:01.887725 systemd[1]: Started sshd@27-10.0.0.72:22-10.0.0.1:39260.service - OpenSSH per-connection server daemon (10.0.0.1:39260). Sep 12 17:37:01.927539 sshd[6653]: Accepted publickey for core from 10.0.0.1 port 39260 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:37:01.929338 sshd[6653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:01.934305 systemd-logind[1577]: New session 28 of user core. Sep 12 17:37:01.945949 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 17:37:02.178963 sshd[6653]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:02.183555 systemd[1]: sshd@27-10.0.0.72:22-10.0.0.1:39260.service: Deactivated successfully. Sep 12 17:37:02.186151 systemd-logind[1577]: Session 28 logged out. Waiting for processes to exit. Sep 12 17:37:02.186218 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 17:37:02.187223 systemd-logind[1577]: Removed session 28. Sep 12 17:37:03.449590 systemd[1]: run-containerd-runc-k8s.io-7df7079d766849d3d88fdb70ad3236913c5cd0382f59f76d89624b97dfb56ab9-runc.qjNDs6.mount: Deactivated successfully.