Sep 5 00:19:28.922163 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:33:49 -00 2025
Sep 5 00:19:28.922188 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:19:28.922202 kernel: BIOS-provided physical RAM map:
Sep 5 00:19:28.922210 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 5 00:19:28.922217 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 5 00:19:28.922225 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 5 00:19:28.922233 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 5 00:19:28.922241 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 5 00:19:28.922248 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 5 00:19:28.922256 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 5 00:19:28.922266 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 5 00:19:28.922274 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Sep 5 00:19:28.922283 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Sep 5 00:19:28.922290 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Sep 5 00:19:28.922298 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 5 00:19:28.922313 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 5 00:19:28.922322 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 5 00:19:28.922329 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 5 00:19:28.922335 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 5 00:19:28.922342 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 5 00:19:28.922348 kernel: NX (Execute Disable) protection: active
Sep 5 00:19:28.922354 kernel: APIC: Static calls initialized
Sep 5 00:19:28.922361 kernel: efi: EFI v2.7 by EDK II
Sep 5 00:19:28.922367 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Sep 5 00:19:28.922374 kernel: SMBIOS 2.8 present.
Sep 5 00:19:28.922380 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Sep 5 00:19:28.922387 kernel: Hypervisor detected: KVM
Sep 5 00:19:28.922395 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 5 00:19:28.922402 kernel: kvm-clock: using sched offset of 5283911297 cycles
Sep 5 00:19:28.922409 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 5 00:19:28.922415 kernel: tsc: Detected 2794.748 MHz processor
Sep 5 00:19:28.922422 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 00:19:28.922429 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 00:19:28.922436 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 5 00:19:28.922443 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 5 00:19:28.922449 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 00:19:28.922458 kernel: Using GB pages for direct mapping
Sep 5 00:19:28.922465 kernel: Secure boot disabled
Sep 5 00:19:28.922471 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:19:28.922478 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 5 00:19:28.922489 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 5 00:19:28.922495 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:19:28.922502 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:19:28.922512 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 5 00:19:28.922518 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:19:28.922528 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:19:28.922535 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:19:28.922542 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:19:28.922549 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 5 00:19:28.922556 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 5 00:19:28.922565 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 5 00:19:28.922572 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 5 00:19:28.922578 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 5 00:19:28.922585 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 5 00:19:28.922592 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 5 00:19:28.922599 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 5 00:19:28.922606 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 5 00:19:28.922612 kernel: No NUMA configuration found
Sep 5 00:19:28.922621 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 5 00:19:28.922631 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 5 00:19:28.922638 kernel: Zone ranges:
Sep 5 00:19:28.922645 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 00:19:28.922652 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 5 00:19:28.922658 kernel: Normal empty
Sep 5 00:19:28.922665 kernel: Movable zone start for each node
Sep 5 00:19:28.922672 kernel: Early memory node ranges
Sep 5 00:19:28.922679 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 5 00:19:28.922686 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 5 00:19:28.922692 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 5 00:19:28.922701 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 5 00:19:28.922708 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 5 00:19:28.922715 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 5 00:19:28.922722 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 5 00:19:28.922729 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 00:19:28.922735 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 5 00:19:28.922742 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 5 00:19:28.922749 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 00:19:28.922756 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 5 00:19:28.922765 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 5 00:19:28.922784 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 5 00:19:28.922791 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 5 00:19:28.922797 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 5 00:19:28.922804 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 5 00:19:28.922811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 00:19:28.922818 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 5 00:19:28.922825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 00:19:28.922832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 5 00:19:28.922841 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 5 00:19:28.922848 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 00:19:28.922855 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 5 00:19:28.922862 kernel: TSC deadline timer available
Sep 5 00:19:28.922869 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 5 00:19:28.922876 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 5 00:19:28.922883 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 5 00:19:28.922890 kernel: kvm-guest: setup PV sched yield
Sep 5 00:19:28.922897 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 5 00:19:28.922906 kernel: Booting paravirtualized kernel on KVM
Sep 5 00:19:28.922913 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 00:19:28.922920 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 5 00:19:28.922927 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 5 00:19:28.922934 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 5 00:19:28.922941 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 5 00:19:28.922947 kernel: kvm-guest: PV spinlocks enabled
Sep 5 00:19:28.922954 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 5 00:19:28.922962 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:19:28.922976 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:19:28.922983 kernel: random: crng init done
Sep 5 00:19:28.922991 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:19:28.922999 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:19:28.923008 kernel: Fallback order for Node 0: 0
Sep 5 00:19:28.923016 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 5 00:19:28.923025 kernel: Policy zone: DMA32
Sep 5 00:19:28.923034 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:19:28.923043 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42872K init, 2324K bss, 166140K reserved, 0K cma-reserved)
Sep 5 00:19:28.923054 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:19:28.923063 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 5 00:19:28.923071 kernel: ftrace: allocated 149 pages with 4 groups
Sep 5 00:19:28.923080 kernel: Dynamic Preempt: voluntary
Sep 5 00:19:28.923097 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:19:28.923114 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:19:28.923124 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:19:28.923132 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:19:28.923139 kernel: Rude variant of Tasks RCU enabled.
Sep 5 00:19:28.923146 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:19:28.923153 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 00:19:28.923163 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:19:28.923170 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 5 00:19:28.923177 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:19:28.923184 kernel: Console: colour dummy device 80x25
Sep 5 00:19:28.923191 kernel: printk: console [ttyS0] enabled
Sep 5 00:19:28.923200 kernel: ACPI: Core revision 20230628
Sep 5 00:19:28.923208 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 5 00:19:28.923215 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 00:19:28.923222 kernel: x2apic enabled
Sep 5 00:19:28.923229 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 5 00:19:28.923236 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 5 00:19:28.923244 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 5 00:19:28.923251 kernel: kvm-guest: setup PV IPIs
Sep 5 00:19:28.923258 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 5 00:19:28.923268 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 5 00:19:28.923275 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 5 00:19:28.923282 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 5 00:19:28.923289 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 5 00:19:28.923296 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 5 00:19:28.923311 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 00:19:28.923319 kernel: Spectre V2 : Mitigation: Retpolines
Sep 5 00:19:28.923339 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 5 00:19:28.923346 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 5 00:19:28.923357 kernel: active return thunk: retbleed_return_thunk
Sep 5 00:19:28.923373 kernel: RETBleed: Mitigation: untrained return thunk
Sep 5 00:19:28.923396 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 00:19:28.923411 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 00:19:28.923429 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 5 00:19:28.923452 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 5 00:19:28.923474 kernel: active return thunk: srso_return_thunk
Sep 5 00:19:28.923482 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 5 00:19:28.923504 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 00:19:28.923517 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 00:19:28.923524 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 00:19:28.923531 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 00:19:28.923538 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 5 00:19:28.923545 kernel: Freeing SMP alternatives memory: 32K
Sep 5 00:19:28.923552 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:19:28.923560 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 00:19:28.923567 kernel: landlock: Up and running.
Sep 5 00:19:28.923574 kernel: SELinux: Initializing.
Sep 5 00:19:28.923583 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:19:28.923590 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:19:28.923597 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 5 00:19:28.923605 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:19:28.923612 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:19:28.923619 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:19:28.923626 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 5 00:19:28.923634 kernel: ... version: 0
Sep 5 00:19:28.923643 kernel: ... bit width: 48
Sep 5 00:19:28.923650 kernel: ... generic registers: 6
Sep 5 00:19:28.923657 kernel: ... value mask: 0000ffffffffffff
Sep 5 00:19:28.923664 kernel: ... max period: 00007fffffffffff
Sep 5 00:19:28.923671 kernel: ... fixed-purpose events: 0
Sep 5 00:19:28.923678 kernel: ... event mask: 000000000000003f
Sep 5 00:19:28.923685 kernel: signal: max sigframe size: 1776
Sep 5 00:19:28.923692 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:19:28.923700 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:19:28.923707 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:19:28.923716 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 00:19:28.923723 kernel: .... node #0, CPUs: #1 #2 #3
Sep 5 00:19:28.923730 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:19:28.923737 kernel: smpboot: Max logical packages: 1
Sep 5 00:19:28.923744 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 5 00:19:28.923751 kernel: devtmpfs: initialized
Sep 5 00:19:28.923758 kernel: x86/mm: Memory block size: 128MB
Sep 5 00:19:28.923766 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 5 00:19:28.923799 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 5 00:19:28.923810 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 5 00:19:28.923818 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 5 00:19:28.923826 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 5 00:19:28.923834 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:19:28.923842 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:19:28.923850 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:19:28.923857 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:19:28.923865 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:19:28.923873 kernel: audit: type=2000 audit(1757031567.810:1): state=initialized audit_enabled=0 res=1
Sep 5 00:19:28.923883 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:19:28.923890 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 00:19:28.923898 kernel: cpuidle: using governor menu
Sep 5 00:19:28.923906 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:19:28.923913 kernel: dca service started, version 1.12.1
Sep 5 00:19:28.923921 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 5 00:19:28.923929 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 5 00:19:28.923936 kernel: PCI: Using configuration type 1 for base access
Sep 5 00:19:28.923947 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 5 00:19:28.923954 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:19:28.923962 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:19:28.923970 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:19:28.923977 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:19:28.923985 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:19:28.923992 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:19:28.924000 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:19:28.924008 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:19:28.924018 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 5 00:19:28.924025 kernel: ACPI: Interpreter enabled
Sep 5 00:19:28.924033 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 5 00:19:28.924040 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 00:19:28.924048 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 00:19:28.924057 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 00:19:28.924067 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 5 00:19:28.924076 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:19:28.924324 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:19:28.924463 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 5 00:19:28.924586 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 5 00:19:28.924595 kernel: PCI host bridge to bus 0000:00
Sep 5 00:19:28.924733 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 00:19:28.924964 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 00:19:28.925080 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 00:19:28.925236 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 5 00:19:28.925382 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 5 00:19:28.925495 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Sep 5 00:19:28.925604 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:19:28.925756 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 5 00:19:28.925924 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 5 00:19:28.926047 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 5 00:19:28.926172 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 5 00:19:28.926291 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 5 00:19:28.926422 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 5 00:19:28.926590 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 00:19:28.926745 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 5 00:19:28.926892 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 5 00:19:28.927010 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 5 00:19:28.927155 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 5 00:19:28.927326 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 5 00:19:28.927449 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 5 00:19:28.927626 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 5 00:19:28.927764 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 5 00:19:28.927922 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 5 00:19:28.928092 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 5 00:19:28.928224 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 5 00:19:28.928355 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 5 00:19:28.928476 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 5 00:19:28.928617 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 5 00:19:28.928738 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 5 00:19:28.928897 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 5 00:19:28.929024 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 5 00:19:28.929146 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 5 00:19:28.929284 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 5 00:19:28.929414 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 5 00:19:28.929424 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 5 00:19:28.929432 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 5 00:19:28.929439 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 5 00:19:28.929447 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 5 00:19:28.929458 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 5 00:19:28.929465 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 5 00:19:28.929472 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 5 00:19:28.929480 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 5 00:19:28.929487 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 5 00:19:28.929494 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 5 00:19:28.929501 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 5 00:19:28.929509 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 5 00:19:28.929516 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 5 00:19:28.929525 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 5 00:19:28.929533 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 5 00:19:28.929540 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 5 00:19:28.929547 kernel: iommu: Default domain type: Translated
Sep 5 00:19:28.929554 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 00:19:28.929561 kernel: efivars: Registered efivars operations
Sep 5 00:19:28.929568 kernel: PCI: Using ACPI for IRQ routing
Sep 5 00:19:28.929576 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 00:19:28.929583 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 5 00:19:28.929593 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 5 00:19:28.929600 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 5 00:19:28.929607 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 5 00:19:28.929729 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 5 00:19:28.929864 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 5 00:19:28.929985 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 00:19:28.929995 kernel: vgaarb: loaded
Sep 5 00:19:28.930002 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 5 00:19:28.930014 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 5 00:19:28.930021 kernel: clocksource: Switched to clocksource kvm-clock
Sep 5 00:19:28.930028 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:19:28.930036 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:19:28.930043 kernel: pnp: PnP ACPI init
Sep 5 00:19:28.930200 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 5 00:19:28.930211 kernel: pnp: PnP ACPI: found 6 devices
Sep 5 00:19:28.930219 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 00:19:28.930230 kernel: NET: Registered PF_INET protocol family
Sep 5 00:19:28.930237 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:19:28.930244 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:19:28.930252 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:19:28.930259 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:19:28.930266 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:19:28.930274 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:19:28.930281 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:19:28.930288 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:19:28.930298 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:19:28.930315 kernel: NET: Registered PF_XDP protocol family
Sep 5 00:19:28.930438 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 5 00:19:28.930558 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 5 00:19:28.930668 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 00:19:28.930791 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 00:19:28.930902 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 00:19:28.931013 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 5 00:19:28.931130 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 5 00:19:28.931245 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Sep 5 00:19:28.931255 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:19:28.931262 kernel: Initialise system trusted keyrings
Sep 5 00:19:28.931269 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:19:28.931277 kernel: Key type asymmetric registered
Sep 5 00:19:28.931284 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:19:28.931292 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 5 00:19:28.931341 kernel: io scheduler mq-deadline registered
Sep 5 00:19:28.931352 kernel: io scheduler kyber registered
Sep 5 00:19:28.931360 kernel: io scheduler bfq registered
Sep 5 00:19:28.931367 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 5 00:19:28.931375 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 5 00:19:28.931382 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 5 00:19:28.931389 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 5 00:19:28.931397 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:19:28.931404 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 5 00:19:28.931412 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 5 00:19:28.931421 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 5 00:19:28.931429 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 5 00:19:28.931565 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 5 00:19:28.931577 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 5 00:19:28.931689 kernel: rtc_cmos 00:04: registered as rtc0
Sep 5 00:19:28.931818 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:19:28 UTC (1757031568)
Sep 5 00:19:28.931944 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 5 00:19:28.931956 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 5 00:19:28.931968 kernel: efifb: probing for efifb
Sep 5 00:19:28.931976 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Sep 5 00:19:28.931983 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Sep 5 00:19:28.931991 kernel: efifb: scrolling: redraw
Sep 5 00:19:28.931998 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Sep 5 00:19:28.932005 kernel: Console: switching to colour frame buffer device 100x37
Sep 5 00:19:28.932031 kernel: fb0: EFI VGA frame buffer device
Sep 5 00:19:28.932040 kernel: pstore: Using crash dump compression: deflate
Sep 5 00:19:28.932048 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 5 00:19:28.932058 kernel: NET: Registered PF_INET6 protocol family
Sep 5 00:19:28.932067 kernel: Segment Routing with IPv6
Sep 5 00:19:28.932076 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 00:19:28.932083 kernel: NET: Registered PF_PACKET protocol family
Sep 5 00:19:28.932090 kernel: Key type dns_resolver registered
Sep 5 00:19:28.932098 kernel: IPI shorthand broadcast: enabled
Sep 5 00:19:28.932105 kernel: sched_clock: Marking stable (1046006191, 116034973)->(1185251122, -23209958)
Sep 5 00:19:28.932113 kernel: registered taskstats version 1
Sep 5 00:19:28.932120 kernel: Loading compiled-in X.509 certificates
Sep 5 00:19:28.932131 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: fbb6a9f06c02a4dbdf06d4c5d95c782040e8492c'
Sep 5 00:19:28.932138 kernel: Key type .fscrypt registered
Sep 5 00:19:28.932146 kernel: Key type fscrypt-provisioning registered
Sep 5 00:19:28.932154 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 00:19:28.932161 kernel: ima: Allocated hash algorithm: sha1
Sep 5 00:19:28.932168 kernel: ima: No architecture policies found
Sep 5 00:19:28.932175 kernel: clk: Disabling unused clocks
Sep 5 00:19:28.932183 kernel: Freeing unused kernel image (initmem) memory: 42872K
Sep 5 00:19:28.932190 kernel: Write protecting the kernel read-only data: 36864k
Sep 5 00:19:28.932200 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 5 00:19:28.932208 kernel: Run /init as init process
Sep 5 00:19:28.932215 kernel: with arguments:
Sep 5 00:19:28.932222 kernel: /init
Sep 5 00:19:28.932229 kernel: with environment:
Sep 5 00:19:28.932237 kernel: HOME=/
Sep 5 00:19:28.932244 kernel: TERM=linux
Sep 5 00:19:28.932251 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 00:19:28.932261 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 00:19:28.932273 systemd[1]: Detected virtualization kvm.
Sep 5 00:19:28.932281 systemd[1]: Detected architecture x86-64.
Sep 5 00:19:28.932289 systemd[1]: Running in initrd.
Sep 5 00:19:28.932299 systemd[1]: No hostname configured, using default hostname.
Sep 5 00:19:28.932319 systemd[1]: Hostname set to <localhost>.
Sep 5 00:19:28.932327 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:19:28.932335 systemd[1]: Queued start job for default target initrd.target.
Sep 5 00:19:28.932343 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:19:28.932351 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:19:28.932359 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 00:19:28.932367 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:19:28.932378 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:19:28.932386 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 00:19:28.932396 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:19:28.932404 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:19:28.932412 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:19:28.932420 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:19:28.932428 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:19:28.932438 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:19:28.932447 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:19:28.932454 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:19:28.932462 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:19:28.932470 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:19:28.932478 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:19:28.932486 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 00:19:28.932494 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:19:28.932502 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:19:28.932513 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:19:28.932520 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:19:28.932528 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 00:19:28.932536 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:19:28.932544 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 00:19:28.932552 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 00:19:28.932560 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:19:28.932568 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:19:28.932576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:19:28.932586 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 00:19:28.932594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:19:28.932602 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 00:19:28.932610 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:19:28.932640 systemd-journald[192]: Collecting audit messages is disabled.
Sep 5 00:19:28.932658 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:19:28.932666 systemd-journald[192]: Journal started
Sep 5 00:19:28.932686 systemd-journald[192]: Runtime Journal (/run/log/journal/d8d8a38730684f788ab1800470ba9d49) is 6.0M, max 48.3M, 42.2M free.
Sep 5 00:19:28.925326 systemd-modules-load[194]: Inserted module 'overlay'
Sep 5 00:19:28.936162 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:19:28.937789 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:19:28.939321 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:19:28.945157 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:19:28.946475 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:19:28.957957 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 00:19:28.959557 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 5 00:19:28.960015 kernel: Bridge firewalling registered
Sep 5 00:19:28.961197 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:19:28.965693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:19:28.972907 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:19:28.973575 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:19:28.975427 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:19:28.978978 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 00:19:28.995994 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:19:28.998321 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:19:29.008633 dracut-cmdline[229]: dracut-dracut-053
Sep 5 00:19:29.012064 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:19:29.035361 systemd-resolved[233]: Positive Trust Anchors:
Sep 5 00:19:29.035386 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:19:29.035430 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:19:29.038870 systemd-resolved[233]: Defaulting to hostname 'linux'.
Sep 5 00:19:29.040406 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:19:29.048918 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:19:29.181831 kernel: SCSI subsystem initialized
Sep 5 00:19:29.196836 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 00:19:29.219843 kernel: iscsi: registered transport (tcp)
Sep 5 00:19:29.257074 kernel: iscsi: registered transport (qla4xxx)
Sep 5 00:19:29.257157 kernel: QLogic iSCSI HBA Driver
Sep 5 00:19:29.377407 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:19:29.395193 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 00:19:29.450287 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 00:19:29.450437 kernel: device-mapper: uevent: version 1.0.3
Sep 5 00:19:29.451682 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 00:19:29.533895 kernel: raid6: avx2x4 gen() 15089 MB/s
Sep 5 00:19:29.550825 kernel: raid6: avx2x2 gen() 22855 MB/s
Sep 5 00:19:29.568333 kernel: raid6: avx2x1 gen() 18323 MB/s
Sep 5 00:19:29.568425 kernel: raid6: using algorithm avx2x2 gen() 22855 MB/s
Sep 5 00:19:29.585955 kernel: raid6: .... xor() 14839 MB/s, rmw enabled
Sep 5 00:19:29.586044 kernel: raid6: using avx2x2 recovery algorithm
Sep 5 00:19:29.611837 kernel: xor: automatically using best checksumming function avx
Sep 5 00:19:29.809824 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 00:19:29.827049 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:19:29.844272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:19:29.861238 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Sep 5 00:19:29.866100 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:19:30.227950 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 00:19:30.252662 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Sep 5 00:19:30.301177 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:19:30.590947 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:19:30.683633 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:19:30.692026 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 00:19:30.710415 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:19:30.715576 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:19:30.717192 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:19:30.717570 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:19:30.729225 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 00:19:30.739390 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 5 00:19:30.745938 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:19:30.754841 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 00:19:30.769904 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 00:19:30.769999 kernel: GPT:9289727 != 19775487
Sep 5 00:19:30.770028 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 00:19:30.770052 kernel: GPT:9289727 != 19775487
Sep 5 00:19:30.770076 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 00:19:30.770101 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:19:30.772800 kernel: libata version 3.00 loaded.
Sep 5 00:19:30.783444 kernel: cryptd: max_cpu_qlen set to 1000
Sep 5 00:19:30.786356 kernel: ahci 0000:00:1f.2: version 3.0
Sep 5 00:19:30.786658 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 5 00:19:30.789106 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:19:30.791496 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:19:30.794090 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:19:30.794410 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:19:30.794698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:19:30.795183 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:19:30.807899 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 5 00:19:30.808180 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 5 00:19:30.808192 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 5 00:19:30.808353 kernel: AES CTR mode by8 optimization enabled
Sep 5 00:19:30.811149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:19:30.814851 kernel: scsi host0: ahci
Sep 5 00:19:30.815040 kernel: scsi host1: ahci
Sep 5 00:19:30.816377 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:19:30.822227 kernel: scsi host2: ahci
Sep 5 00:19:30.816505 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:19:30.828030 kernel: scsi host3: ahci
Sep 5 00:19:30.828401 kernel: scsi host4: ahci
Sep 5 00:19:30.829881 kernel: scsi host5: ahci
Sep 5 00:19:30.834096 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 5 00:19:30.834161 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 5 00:19:30.834231 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 5 00:19:30.839532 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 5 00:19:30.839591 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 5 00:19:30.839605 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 5 00:19:30.844136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:19:30.854998 kernel: BTRFS: device fsid 3713859d-e283-4add-80dc-7ca8465b1d1d devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (461)
Sep 5 00:19:30.855029 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (482)
Sep 5 00:19:30.859848 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 00:19:30.868044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:19:30.877197 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 00:19:30.891062 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 00:19:30.910337 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 00:19:30.920294 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:19:30.934982 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 00:19:30.975005 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:19:31.004534 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:19:31.144641 disk-uuid[559]: Primary Header is updated.
Sep 5 00:19:31.144641 disk-uuid[559]: Secondary Entries is updated.
Sep 5 00:19:31.144641 disk-uuid[559]: Secondary Header is updated.
Sep 5 00:19:31.148831 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 5 00:19:31.148902 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 5 00:19:31.149799 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 5 00:19:31.151383 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:19:31.157822 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:19:31.157888 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 5 00:19:31.157918 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 5 00:19:31.158813 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 5 00:19:31.160318 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 5 00:19:31.160347 kernel: ata3.00: applying bridge limits
Sep 5 00:19:31.163526 kernel: ata3.00: configured for UDMA/100
Sep 5 00:19:31.163575 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 5 00:19:31.220800 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 5 00:19:31.221086 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 5 00:19:31.237801 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 5 00:19:32.169812 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:19:32.169869 disk-uuid[569]: The operation has completed successfully.
Sep 5 00:19:32.194738 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 00:19:32.194883 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 00:19:32.233973 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 00:19:32.237898 sh[597]: Success
Sep 5 00:19:32.250802 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 5 00:19:32.287548 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 00:19:32.302578 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 00:19:32.308620 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 00:19:32.319369 kernel: BTRFS info (device dm-0): first mount of filesystem 3713859d-e283-4add-80dc-7ca8465b1d1d
Sep 5 00:19:32.319413 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:19:32.319426 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 00:19:32.320514 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 00:19:32.321354 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 00:19:32.328747 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 00:19:32.329878 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 00:19:32.338950 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 00:19:32.341719 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 00:19:32.353014 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:19:32.353063 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:19:32.353080 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:19:32.356806 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:19:32.369101 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 00:19:32.371381 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:19:32.476853 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:19:32.485993 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:19:32.510501 systemd-networkd[775]: lo: Link UP
Sep 5 00:19:32.510513 systemd-networkd[775]: lo: Gained carrier
Sep 5 00:19:32.512234 systemd-networkd[775]: Enumeration completed
Sep 5 00:19:32.512327 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:19:32.512632 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:19:32.512636 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:19:32.515141 systemd-networkd[775]: eth0: Link UP
Sep 5 00:19:32.515146 systemd-networkd[775]: eth0: Gained carrier
Sep 5 00:19:32.515156 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:19:32.515193 systemd[1]: Reached target network.target - Network.
Sep 5 00:19:32.571855 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:19:32.694342 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 00:19:32.703013 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 00:19:32.811903 ignition[780]: Ignition 2.19.0
Sep 5 00:19:32.811918 ignition[780]: Stage: fetch-offline
Sep 5 00:19:32.811976 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:19:32.811989 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:19:32.812098 ignition[780]: parsed url from cmdline: ""
Sep 5 00:19:32.812102 ignition[780]: no config URL provided
Sep 5 00:19:32.812108 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 00:19:32.812117 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Sep 5 00:19:32.812147 ignition[780]: op(1): [started] loading QEMU firmware config module
Sep 5 00:19:32.812153 ignition[780]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 00:19:32.820465 ignition[780]: op(1): [finished] loading QEMU firmware config module
Sep 5 00:19:32.820486 ignition[780]: QEMU firmware config was not found. Ignoring...
Sep 5 00:19:32.859804 ignition[780]: parsing config with SHA512: 81cd9f483b8f547166166d717692a7d475aee9702f24b326c0fa02882ab1a2a9a48cabaccb23ac937778b87870194054c4d7f05c07818700d82399b3fbea3800
Sep 5 00:19:32.868567 unknown[780]: fetched base config from "system"
Sep 5 00:19:32.868579 unknown[780]: fetched user config from "qemu"
Sep 5 00:19:32.869041 ignition[780]: fetch-offline: fetch-offline passed
Sep 5 00:19:32.869112 ignition[780]: Ignition finished successfully
Sep 5 00:19:32.872006 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:19:32.873645 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 00:19:32.886077 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 00:19:32.904233 ignition[789]: Ignition 2.19.0
Sep 5 00:19:32.904251 ignition[789]: Stage: kargs
Sep 5 00:19:32.904494 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:19:32.904512 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:19:32.905709 ignition[789]: kargs: kargs passed
Sep 5 00:19:32.905773 ignition[789]: Ignition finished successfully
Sep 5 00:19:32.908944 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 00:19:32.927177 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 00:19:32.952497 ignition[798]: Ignition 2.19.0
Sep 5 00:19:32.952511 ignition[798]: Stage: disks
Sep 5 00:19:32.952753 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:19:32.952769 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:19:32.954000 ignition[798]: disks: disks passed
Sep 5 00:19:32.956699 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 00:19:32.954058 ignition[798]: Ignition finished successfully
Sep 5 00:19:32.958054 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 00:19:32.959887 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 00:19:32.961078 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:19:32.962940 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:19:32.964027 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:19:32.983063 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 00:19:33.020739 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 5 00:19:33.253965 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 00:19:33.281882 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 00:19:33.386802 kernel: EXT4-fs (vda9): mounted filesystem 83287606-d110-4d13-a801-c8d88205bd5a r/w with ordered data mode. Quota mode: none.
Sep 5 00:19:33.387720 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 00:19:33.390284 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:19:33.400923 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:19:33.403158 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 00:19:33.406010 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 00:19:33.410575 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Sep 5 00:19:33.406081 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 00:19:33.417866 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:19:33.417892 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:19:33.417907 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:19:33.406118 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:19:33.412145 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 00:19:33.422518 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:19:33.419085 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 00:19:33.422993 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:19:33.462052 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 00:19:33.467630 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Sep 5 00:19:33.472087 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 00:19:33.476729 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 00:19:33.590468 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 00:19:33.612017 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 00:19:33.613975 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 00:19:33.624262 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 00:19:33.626111 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:19:33.654002 systemd-networkd[775]: eth0: Gained IPv6LL
Sep 5 00:19:33.678375 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 00:19:33.697840 ignition[932]: INFO : Ignition 2.19.0
Sep 5 00:19:33.697840 ignition[932]: INFO : Stage: mount
Sep 5 00:19:33.699703 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:19:33.699703 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:19:33.699703 ignition[932]: INFO : mount: mount passed
Sep 5 00:19:33.699703 ignition[932]: INFO : Ignition finished successfully
Sep 5 00:19:33.702443 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 00:19:33.713037 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 00:19:34.397019 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:19:34.405637 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941) Sep 5 00:19:34.405685 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd Sep 5 00:19:34.405697 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:19:34.407181 kernel: BTRFS info (device vda6): using free space tree Sep 5 00:19:34.409796 kernel: BTRFS info (device vda6): auto enabling async discard Sep 5 00:19:34.411585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 00:19:34.443453 ignition[958]: INFO : Ignition 2.19.0 Sep 5 00:19:34.443453 ignition[958]: INFO : Stage: files Sep 5 00:19:34.445708 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:19:34.445708 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:19:34.445708 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Sep 5 00:19:34.449747 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 00:19:34.449747 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 00:19:34.452658 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 00:19:34.452658 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 00:19:34.452658 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 00:19:34.451527 unknown[958]: wrote ssh authorized keys file for user: core Sep 5 00:19:34.458665 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 5 00:19:34.458665 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 5 00:19:34.549178 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 5 00:19:35.841597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:19:35.844624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 5 00:19:36.542327 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 5 00:19:37.725510 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:19:37.725510 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 5 00:19:37.729910 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:19:37.729910 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:19:37.729910 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 5 00:19:37.729910 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 5 00:19:37.729910 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:19:37.729910 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:19:37.729910 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 5 00:19:37.729910 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 5 00:19:37.764830 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:19:37.770029 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:19:37.771805 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 5 00:19:37.771805 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 5 00:19:37.771805 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 00:19:37.771805 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:19:37.771805 ignition[958]: INFO : files: 
createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:19:37.771805 ignition[958]: INFO : files: files passed Sep 5 00:19:37.771805 ignition[958]: INFO : Ignition finished successfully Sep 5 00:19:37.774206 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 00:19:37.785101 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 00:19:37.788160 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 00:19:37.790105 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 00:19:37.790225 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 00:19:37.799378 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Sep 5 00:19:37.802443 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:19:37.802443 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:19:37.807036 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:19:37.805057 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:19:37.807650 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 5 00:19:37.822948 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 00:19:37.850935 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 00:19:37.851094 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 00:19:37.851870 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 00:19:37.854691 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 00:19:37.855236 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 00:19:37.856210 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 00:19:37.878436 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:19:37.894947 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 00:19:37.906209 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:19:37.907623 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:19:37.910075 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:19:37.912271 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:19:37.912395 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:19:37.914709 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 00:19:37.916570 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:19:37.918805 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 00:19:37.921027 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:19:37.923407 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:19:37.925741 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:19:37.928116 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
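Taken together, the files-stage ops above (two downloads, several file writes, one symlink, two units, and two presets) are driven by a declarative Ignition config. A rough sketch of a config that would produce similar ops, reconstructed from the log rather than read from the machine, with the spec version and field layout as assumptions:

import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "storage": {
        "files": [
            # Fetched over HTTPS during the files stage (op(3) above).
            {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
            {"path": "/etc/flatcar/update.conf"},
        ],
        "links": [
            # Matches the op(9) link that exposes the sysext image.
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
            {"name": "coreos-metadata.service", "enabled": False},
        ],
    },
}
print(json.dumps(config, indent=2))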
Sep 5 00:19:37.930566 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:19:37.932609 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:19:37.934474 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:19:37.936236 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:19:37.936383 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:19:37.938388 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:19:37.940026 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:19:37.942334 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 00:19:37.942470 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:19:37.944943 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:19:37.945110 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:19:37.948110 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 00:19:37.948230 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:19:37.950469 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:19:37.952737 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:19:37.955904 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:19:37.958707 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 00:19:37.961025 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 00:19:37.963843 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 00:19:37.963969 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:19:37.966144 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:19:37.966248 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:19:37.968553 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:19:37.968684 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:19:37.972011 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 00:19:37.972140 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 00:19:37.983152 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 00:19:37.985495 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 00:19:37.985676 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:19:37.989466 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 00:19:37.991231 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:19:37.991491 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:19:37.993813 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:19:37.994009 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:19:37.999751 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:19:37.999931 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:19:38.023162 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 00:19:38.068900 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 00:19:38.069051 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Sep 5 00:19:38.073257 ignition[1012]: INFO : Ignition 2.19.0 Sep 5 00:19:38.073257 ignition[1012]: INFO : Stage: umount Sep 5 00:19:38.075066 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:19:38.075066 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:19:38.078211 ignition[1012]: INFO : umount: umount passed Sep 5 00:19:38.079088 ignition[1012]: INFO : Ignition finished successfully Sep 5 00:19:38.081652 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 00:19:38.081865 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 00:19:38.083385 systemd[1]: Stopped target network.target - Network. Sep 5 00:19:38.087068 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 00:19:38.087206 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 00:19:38.087964 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 00:19:38.088032 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 00:19:38.088519 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 00:19:38.088585 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 00:19:38.089158 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 00:19:38.089223 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:19:38.089582 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:19:38.089647 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 00:19:38.090524 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 00:19:38.099470 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 00:19:38.104160 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 00:19:38.104360 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 00:19:38.107222 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 00:19:38.107319 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:19:38.107877 systemd-networkd[775]: eth0: DHCPv6 lease lost Sep 5 00:19:38.111942 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 00:19:38.112191 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 00:19:38.113529 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 00:19:38.113592 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:19:38.121073 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 00:19:38.123089 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 00:19:38.123159 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:19:38.127359 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:19:38.127415 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:19:38.130611 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 00:19:38.130666 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 00:19:38.134220 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:19:38.148563 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 00:19:38.149760 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Sep 5 00:19:38.152298 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 00:19:38.153484 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:19:38.157260 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 00:19:38.157340 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 00:19:38.161048 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 00:19:38.161117 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:19:38.164449 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 00:19:38.164515 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:19:38.168427 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 00:19:38.168498 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 00:19:38.172076 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:19:38.172142 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:19:38.187018 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 00:19:38.187516 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 00:19:38.187596 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:19:38.190475 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:19:38.190544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:19:38.195240 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 00:19:38.195363 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 00:19:38.196537 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 00:19:38.201198 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 00:19:38.216071 systemd[1]: Switching root. Sep 5 00:19:38.251940 systemd-journald[192]: Journal stopped Sep 5 00:19:40.164889 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Sep 5 00:19:40.164977 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 00:19:40.165011 kernel: SELinux: policy capability open_perms=1 Sep 5 00:19:40.165027 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 00:19:40.165042 kernel: SELinux: policy capability always_check_network=0 Sep 5 00:19:40.165057 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 00:19:40.165073 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 00:19:40.165087 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 00:19:40.165109 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 00:19:40.165124 kernel: audit: type=1403 audit(1757031579.216:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 00:19:40.165139 systemd[1]: Successfully loaded SELinux policy in 67.504ms. Sep 5 00:19:40.165180 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.670ms. Sep 5 00:19:40.165199 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:19:40.165215 systemd[1]: Detected virtualization kvm. 
Sep 5 00:19:40.165231 systemd[1]: Detected architecture x86-64. Sep 5 00:19:40.165246 systemd[1]: Detected first boot. Sep 5 00:19:40.165262 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:19:40.165279 zram_generator::config[1057]: No configuration found. Sep 5 00:19:40.165296 systemd[1]: Populated /etc with preset unit settings. Sep 5 00:19:40.165316 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 00:19:40.165332 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 00:19:40.165349 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 00:19:40.165366 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 00:19:40.165382 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 00:19:40.165404 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 00:19:40.165420 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 00:19:40.165436 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 00:19:40.165456 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 00:19:40.165473 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 00:19:40.165488 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 00:19:40.165504 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:19:40.165520 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:19:40.165538 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 00:19:40.165554 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 00:19:40.165571 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 00:19:40.165594 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:19:40.165609 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 5 00:19:40.165621 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:19:40.165634 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 00:19:40.165646 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 00:19:40.165659 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 00:19:40.165671 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 00:19:40.165837 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:19:40.165857 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:19:40.165878 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:19:40.165895 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:19:40.165911 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 00:19:40.165923 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 00:19:40.167622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:19:40.167659 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 5 00:19:40.167672 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:19:40.167686 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 00:19:40.167701 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 00:19:40.167721 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 00:19:40.167733 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 00:19:40.167746 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:19:40.167759 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 00:19:40.167788 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 00:19:40.167804 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 00:19:40.167819 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 00:19:40.167832 systemd[1]: Reached target machines.target - Containers. Sep 5 00:19:40.167844 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 00:19:40.167859 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:19:40.167872 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:19:40.167884 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 00:19:40.167897 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:19:40.167909 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:19:40.167922 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:19:40.167935 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 00:19:40.167947 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:19:40.167963 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 00:19:40.167975 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 00:19:40.167987 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 00:19:40.168012 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 00:19:40.168025 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 00:19:40.168038 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:19:40.168050 kernel: loop: module loaded Sep 5 00:19:40.168064 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:19:40.168076 kernel: fuse: init (API version 7.39) Sep 5 00:19:40.168091 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:19:40.168137 systemd-journald[1120]: Collecting audit messages is disabled. Sep 5 00:19:40.168162 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 00:19:40.168174 systemd-journald[1120]: Journal started Sep 5 00:19:40.168196 systemd-journald[1120]: Runtime Journal (/run/log/journal/d8d8a38730684f788ab1800470ba9d49) is 6.0M, max 48.3M, 42.2M free. 
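The journald size report above follows a fixed accounting: current usage plus remaining budget should equal the configured maximum, up to the one-decimal rounding journald applies when printing. Checking the logged runtime-journal values:

used, max_size, free = 6.0, 48.3, 42.2  # MiB, from the line above
assert abs((used + free) - max_size) <= 0.2  # 48.2 vs 48.3: print rounding
print(f"{100 * used / max_size:.0f}% of the runtime journal budget in use")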
Sep 5 00:19:39.908658 systemd[1]: Queued start job for default target multi-user.target. Sep 5 00:19:39.934597 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 00:19:39.935153 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 00:19:39.935568 systemd[1]: systemd-journald.service: Consumed 1.425s CPU time. Sep 5 00:19:40.173264 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:19:40.175130 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 00:19:40.175204 systemd[1]: Stopped verity-setup.service. Sep 5 00:19:40.178862 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:19:40.182795 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:19:40.185552 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 00:19:40.187237 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 00:19:40.188966 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 00:19:40.190966 kernel: ACPI: bus type drm_connector registered Sep 5 00:19:40.191162 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 00:19:40.192635 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 00:19:40.194050 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 00:19:40.195483 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:19:40.197479 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 00:19:40.197710 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 00:19:40.224155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:19:40.224375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:19:40.226133 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:19:40.226366 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:19:40.228151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:19:40.228367 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:19:40.230171 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 00:19:40.230391 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 00:19:40.232067 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:19:40.232292 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:19:40.234009 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:19:40.235693 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:19:40.253105 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:19:40.260955 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 00:19:40.264194 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 00:19:40.265792 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:19:40.267512 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:19:40.270704 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Sep 5 00:19:40.272683 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 00:19:40.276073 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 00:19:40.281697 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:19:40.284511 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 00:19:40.284617 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:19:40.287656 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 00:19:40.296092 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:19:40.299158 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 00:19:40.300348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:19:40.302213 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 00:19:40.304419 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 00:19:40.320801 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:19:40.326988 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 00:19:40.331943 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 00:19:40.333606 systemd-journald[1120]: Time spent on flushing to /var/log/journal/d8d8a38730684f788ab1800470ba9d49 is 15.012ms for 996 entries. Sep 5 00:19:40.333606 systemd-journald[1120]: System Journal (/var/log/journal/d8d8a38730684f788ab1800470ba9d49) is 8.0M, max 195.6M, 187.6M free. Sep 5 00:19:40.336265 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 00:19:40.338415 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:19:40.339882 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:19:40.355925 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 5 00:19:40.429233 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 00:19:40.433759 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 00:19:40.439811 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 00:19:40.444344 systemd-journald[1120]: Received client request to flush runtime journal. Sep 5 00:19:40.444381 kernel: loop0: detected capacity change from 0 to 140768 Sep 5 00:19:40.444396 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 00:19:40.450003 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 00:19:40.459998 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 00:19:40.461874 kernel: loop1: detected capacity change from 0 to 142488 Sep 5 00:19:40.463359 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 00:19:40.512850 kernel: loop2: detected capacity change from 0 to 229808 Sep 5 00:19:40.521064 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 00:19:40.532165 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:19:40.561810 kernel: loop3: detected capacity change from 0 to 140768 Sep 5 00:19:40.587784 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Sep 5 00:19:40.587803 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Sep 5 00:19:40.589802 kernel: loop4: detected capacity change from 0 to 142488 Sep 5 00:19:40.595569 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:19:40.616806 kernel: loop5: detected capacity change from 0 to 229808 Sep 5 00:19:40.629305 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 00:19:40.630118 (sd-merge)[1193]: Merged extensions into '/usr'. Sep 5 00:19:40.634349 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 00:19:40.634372 systemd[1]: Reloading... Sep 5 00:19:40.708815 zram_generator::config[1224]: No configuration found. Sep 5 00:19:40.826610 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 00:19:40.849120 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:19:40.907028 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 00:19:40.907471 systemd[1]: Reloading finished in 272 ms. Sep 5 00:19:40.951466 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 00:19:40.953186 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 00:19:40.954945 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 00:19:40.974166 systemd[1]: Starting ensure-sysext.service... Sep 5 00:19:41.022405 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:19:41.029142 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Sep 5 00:19:41.029160 systemd[1]: Reloading... Sep 5 00:19:41.047461 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 00:19:41.047875 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 00:19:41.048920 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:19:41.049291 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Sep 5 00:19:41.049370 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Sep 5 00:19:41.053535 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:19:41.053549 systemd-tmpfiles[1260]: Skipping /boot Sep 5 00:19:41.064657 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:19:41.064674 systemd-tmpfiles[1260]: Skipping /boot Sep 5 00:19:41.131814 zram_generator::config[1290]: No configuration found. 
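sd-merge found three system extensions above; the kubernetes one traces back to the /etc/extensions/kubernetes.raw link written during the Ignition files stage. A sketch of how such a list could be discovered, assuming the standard sysext search directories (the real merge also validates each image's extension-release metadata, omitted here):

from pathlib import Path

SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extensions() -> list[str]:
    names = []
    for d in map(Path, SEARCH_PATHS):
        if not d.is_dir():
            continue
        for entry in sorted(d.iterdir()):
            if entry.name.endswith(".raw") or entry.is_dir():
                names.append(entry.name.removesuffix(".raw"))
    return names

print(list_extensions())  # e.g. ['containerd-flatcar', 'docker-flatcar', 'kubernetes']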
Sep 5 00:19:41.395114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:19:41.450970 systemd[1]: Reloading finished in 421 ms. Sep 5 00:19:41.471329 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:19:41.493513 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:19:41.501525 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:19:41.504987 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:19:41.507505 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:19:41.512042 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:19:41.519031 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:19:41.529092 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:19:41.537710 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:19:41.538014 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:19:41.551304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:19:41.557155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:19:41.560418 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Sep 5 00:19:41.560708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:19:41.563294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:19:41.566291 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:19:41.569853 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:19:41.571252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:19:41.571501 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:19:41.583144 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:19:41.583393 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:19:41.586337 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 00:19:41.588563 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:19:41.588801 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:19:41.597891 augenrules[1352]: No rules Sep 5 00:19:41.601610 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:19:41.607974 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:19:41.611606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:19:41.611857 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
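The "Duplicate line for path ..., ignoring" warnings above reflect first-wins handling when several tmpfiles.d entries name the same path. A toy model of that behavior, with the entry format simplified to whitespace-separated fields and the path in the second column:

def apply_first_wins(lines: list[str]) -> list[str]:
    seen, kept = set(), []
    for line in lines:
        path = line.split()[1]  # type field, then path
        if path in seen:
            print(f'Duplicate line for path "{path}", ignoring.')
            continue
        seen.add(path)
        kept.append(line)
    return kept

apply_first_wins(["d /root 0700 root root -",
                  "d /root 0750 root root -"])  # second entry is dropped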
Sep 5 00:19:41.624908 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:19:41.656204 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:19:41.662816 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:19:41.666009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:19:41.678272 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:19:41.682122 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 00:19:41.684880 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:19:41.686280 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:19:41.689646 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:19:41.692990 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:19:41.700312 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:19:41.700568 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:19:41.703523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:19:41.703808 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:19:41.710153 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:19:41.710417 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:19:41.729015 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 00:19:41.733801 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1364) Sep 5 00:19:41.747218 systemd[1]: Finished ensure-sysext.service. Sep 5 00:19:41.781700 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 5 00:19:41.792936 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:19:41.793098 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:19:41.796798 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 5 00:19:41.798598 systemd-resolved[1330]: Positive Trust Anchors: Sep 5 00:19:41.798623 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:19:41.798655 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:19:41.805266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:19:41.806464 kernel: ACPI: button: Power Button [PWRF] Sep 5 00:19:41.808214 systemd-resolved[1330]: Defaulting to hostname 'linux'. 
Sep 5 00:19:41.813091 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:19:41.816290 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:19:41.817808 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 5 00:19:41.821104 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:19:41.823033 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:19:41.825853 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:19:41.828938 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:19:41.828979 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:19:41.829302 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:19:41.830963 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:19:41.831160 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:19:41.832630 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:19:41.832848 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:19:41.834721 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:19:41.834921 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:19:41.838939 systemd-networkd[1387]: lo: Link UP Sep 5 00:19:41.838970 systemd-networkd[1387]: lo: Gained carrier Sep 5 00:19:41.839447 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:19:41.839833 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 5 00:19:41.840139 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 5 00:19:41.841308 systemd-networkd[1387]: Enumeration completed Sep 5 00:19:41.841753 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:19:41.841765 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:19:41.842624 systemd-networkd[1387]: eth0: Link UP Sep 5 00:19:41.842635 systemd-networkd[1387]: eth0: Gained carrier Sep 5 00:19:41.842647 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:19:41.843153 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 5 00:19:41.845158 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 5 00:19:41.847974 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:19:41.855934 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:19:41.856204 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:19:41.856842 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:19:41.859472 systemd[1]: Reached target network.target - Network. Sep 5 00:19:41.862003 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
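The DHCPv4 lease above (10.0.0.128/16 via gateway 10.0.0.1) can be sanity-checked with the standard library:

import ipaddress

iface = ipaddress.ip_interface("10.0.0.128/16")
gateway = ipaddress.ip_address("10.0.0.1")
print(iface.network)             # 10.0.0.0/16
print(iface.network.netmask)     # 255.255.0.0
print(gateway in iface.network)  # True: the gateway is on-link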
Sep 5 00:19:41.876131 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 00:19:41.899999 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 00:19:41.901741 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:19:41.901862 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:19:41.902669 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 00:19:41.946035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:19:41.958451 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:19:41.958755 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:19:41.971761 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:19:41.973404 systemd-timesyncd[1411]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 00:19:41.973493 systemd-timesyncd[1411]: Initial clock synchronization to Fri 2025-09-05 00:19:42.293823 UTC. Sep 5 00:19:41.974279 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:19:41.985808 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 00:19:41.994167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:19:42.000254 kernel: kvm_amd: TSC scaling supported Sep 5 00:19:42.000337 kernel: kvm_amd: Nested Virtualization enabled Sep 5 00:19:42.000357 kernel: kvm_amd: Nested Paging enabled Sep 5 00:19:42.002178 kernel: kvm_amd: LBR virtualization supported Sep 5 00:19:42.002211 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 5 00:19:42.002225 kernel: kvm_amd: Virtual GIF supported Sep 5 00:19:42.028904 kernel: EDAC MC: Ver: 3.0.0 Sep 5 00:19:42.067101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:19:42.075090 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 00:19:42.087241 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 00:19:42.098454 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:19:42.131645 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 00:19:42.134400 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:19:42.135674 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:19:42.137134 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:19:42.138489 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:19:42.140128 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:19:42.141445 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:19:42.142782 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:19:42.144196 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
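Comparing the journal timestamp of the synchronization message with the wall-clock time it reports gives the size of the initial step timesyncd applied:

from datetime import datetime

logged_at = datetime.fromisoformat("2025-09-05 00:19:41.973493")
stepped_to = datetime.fromisoformat("2025-09-05 00:19:42.293823")
delta = (stepped_to - logged_at).total_seconds()
print(f"clock stepped forward by ~{delta:.3f} s")  # ~0.320 s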
Sep 5 00:19:42.144242 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:19:42.146973 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:19:42.149019 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:19:42.152722 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:19:42.169006 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:19:42.172153 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 00:19:42.174033 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:19:42.175380 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:19:42.176521 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:19:42.177531 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:19:42.177582 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:19:42.179076 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:19:42.181475 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:19:42.186004 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 00:19:42.192087 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:19:42.197165 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:19:42.195856 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:19:42.197644 jq[1442]: false Sep 5 00:19:42.198441 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:19:42.204991 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:19:42.210528 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:19:42.214409 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 00:19:42.222087 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:19:42.226047 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 00:19:42.226956 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:19:42.229191 extend-filesystems[1443]: Found loop3 Sep 5 00:19:42.230858 extend-filesystems[1443]: Found loop4 Sep 5 00:19:42.230858 extend-filesystems[1443]: Found loop5 Sep 5 00:19:42.230858 extend-filesystems[1443]: Found sr0 Sep 5 00:19:42.230858 extend-filesystems[1443]: Found vda Sep 5 00:19:42.230858 extend-filesystems[1443]: Found vda1 Sep 5 00:19:42.230858 extend-filesystems[1443]: Found vda2 Sep 5 00:19:42.230858 extend-filesystems[1443]: Found vda3 Sep 5 00:19:42.230858 extend-filesystems[1443]: Found usr Sep 5 00:19:42.230858 extend-filesystems[1443]: Found vda4 Sep 5 00:19:42.230858 extend-filesystems[1443]: Found vda6 Sep 5 00:19:42.230858 extend-filesystems[1443]: Found vda7 Sep 5 00:19:42.230858 extend-filesystems[1443]: Found vda9 Sep 5 00:19:42.230858 extend-filesystems[1443]: Checking size of /dev/vda9 Sep 5 00:19:42.231155 systemd[1]: Starting update-engine.service - Update Engine... 
Sep 5 00:19:42.233349 dbus-daemon[1441]: [system] SELinux support is enabled Sep 5 00:19:42.238137 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:19:42.240179 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:19:42.247652 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 00:19:42.262452 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:19:42.262758 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:19:42.263230 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:19:42.263490 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 00:19:42.266650 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 00:19:42.267022 jq[1455]: true Sep 5 00:19:42.269115 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:19:42.273896 extend-filesystems[1443]: Resized partition /dev/vda9 Sep 5 00:19:42.277552 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1373) Sep 5 00:19:42.284598 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) Sep 5 00:19:42.289032 jq[1466]: true Sep 5 00:19:42.299222 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:19:42.316990 update_engine[1452]: I20250905 00:19:42.316902 1452 main.cc:92] Flatcar Update Engine starting Sep 5 00:19:42.317069 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:19:42.317095 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:19:42.318994 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:19:42.320892 update_engine[1452]: I20250905 00:19:42.320622 1452 update_check_scheduler.cc:74] Next update check in 5m2s Sep 5 00:19:42.319030 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:19:42.320875 systemd[1]: Started update-engine.service - Update Engine. Sep 5 00:19:42.333009 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:19:42.348048 tar[1464]: linux-amd64/LICENSE Sep 5 00:19:42.348435 tar[1464]: linux-amd64/helm Sep 5 00:19:42.348426 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button) Sep 5 00:19:42.348456 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 5 00:19:42.349137 systemd-logind[1451]: New seat seat0. Sep 5 00:19:42.351027 systemd[1]: Started systemd-logind.service - User Login Management. 
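prepare-helm.service ("Unpack helm to /opt/bin") runs against the tarball fetched during the files stage, and the tar[1464] lines above show archive members as they are processed. A sketch of what such a unit's payload could do, assuming the archive path from the earlier download; this illustrates the idea, not the unit's actual script:

import tarfile

ARCHIVE = "/opt/helm-v3.17.3-linux-amd64.tar.gz"

def unpack_helm(dest: str = "/opt/bin") -> None:
    with tarfile.open(ARCHIVE) as tar:
        member = tar.getmember("linux-amd64/helm")
        member.name = "helm"            # strip the leading directory
        tar.extract(member, path=dest)  # yields /opt/bin/helm

# unpack_helm() needs the archive present and write access to /opt/bin.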
Sep 5 00:19:42.379842 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 00:19:42.422712 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:19:42.493852 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 00:19:42.593898 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 00:19:42.593898 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 00:19:42.593898 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 00:19:42.598527 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Sep 5 00:19:42.598656 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:19:42.599030 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 00:19:42.631707 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:19:42.633839 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:19:42.637332 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 00:19:42.718055 containerd[1469]: time="2025-09-05T00:19:42.717927287Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 00:19:42.751102 containerd[1469]: time="2025-09-05T00:19:42.750936862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:19:42.753645 containerd[1469]: time="2025-09-05T00:19:42.753578616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:19:42.753645 containerd[1469]: time="2025-09-05T00:19:42.753614138Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 00:19:42.753645 containerd[1469]: time="2025-09-05T00:19:42.753641127Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 00:19:42.754002 containerd[1469]: time="2025-09-05T00:19:42.753966659Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 00:19:42.754002 containerd[1469]: time="2025-09-05T00:19:42.753995648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 00:19:42.754378 containerd[1469]: time="2025-09-05T00:19:42.754096110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:19:42.754378 containerd[1469]: time="2025-09-05T00:19:42.754117962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:19:42.754453 containerd[1469]: time="2025-09-05T00:19:42.754410357Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:19:42.754453 containerd[1469]: time="2025-09-05T00:19:42.754435365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 5 00:19:42.754521 containerd[1469]: time="2025-09-05T00:19:42.754454455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:19:42.754521 containerd[1469]: time="2025-09-05T00:19:42.754468669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 00:19:42.754734 containerd[1469]: time="2025-09-05T00:19:42.754624922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:19:42.755105 containerd[1469]: time="2025-09-05T00:19:42.755076570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:19:42.755308 containerd[1469]: time="2025-09-05T00:19:42.755253372Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:19:42.755308 containerd[1469]: time="2025-09-05T00:19:42.755277839Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 00:19:42.755439 containerd[1469]: time="2025-09-05T00:19:42.755414409Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 00:19:42.755520 containerd[1469]: time="2025-09-05T00:19:42.755498865Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:19:42.815167 containerd[1469]: time="2025-09-05T00:19:42.815079066Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 00:19:42.815502 containerd[1469]: time="2025-09-05T00:19:42.815185249Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 00:19:42.815502 containerd[1469]: time="2025-09-05T00:19:42.815337574Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 00:19:42.815502 containerd[1469]: time="2025-09-05T00:19:42.815359447Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 00:19:42.815502 containerd[1469]: time="2025-09-05T00:19:42.815412851Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 00:19:42.815722 containerd[1469]: time="2025-09-05T00:19:42.815684728Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 00:19:42.816168 containerd[1469]: time="2025-09-05T00:19:42.816048470Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 00:19:42.816288 containerd[1469]: time="2025-09-05T00:19:42.816214206Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 00:19:42.816288 containerd[1469]: time="2025-09-05T00:19:42.816250208Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 00:19:42.816288 containerd[1469]: time="2025-09-05T00:19:42.816264786Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Sep 5 00:19:42.816288 containerd[1469]: time="2025-09-05T00:19:42.816286680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 00:19:42.816432 containerd[1469]: time="2025-09-05T00:19:42.816301029Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 00:19:42.816432 containerd[1469]: time="2025-09-05T00:19:42.816329393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 00:19:42.816432 containerd[1469]: time="2025-09-05T00:19:42.816342887Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 00:19:42.816432 containerd[1469]: time="2025-09-05T00:19:42.816367531Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 00:19:42.816432 containerd[1469]: time="2025-09-05T00:19:42.816387726Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 00:19:42.816432 containerd[1469]: time="2025-09-05T00:19:42.816420425Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 00:19:42.816432 containerd[1469]: time="2025-09-05T00:19:42.816432710Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 00:19:42.816620 containerd[1469]: time="2025-09-05T00:19:42.816451915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816620 containerd[1469]: time="2025-09-05T00:19:42.816464867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816620 containerd[1469]: time="2025-09-05T00:19:42.816493773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816620 containerd[1469]: time="2025-09-05T00:19:42.816507580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816620 containerd[1469]: time="2025-09-05T00:19:42.816519793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816620 containerd[1469]: time="2025-09-05T00:19:42.816534100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816620 containerd[1469]: time="2025-09-05T00:19:42.816545552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816620 containerd[1469]: time="2025-09-05T00:19:42.816573875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816620 containerd[1469]: time="2025-09-05T00:19:42.816591589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816620 containerd[1469]: time="2025-09-05T00:19:42.816606500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816620 containerd[1469]: time="2025-09-05T00:19:42.816624235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 5 00:19:42.816937 containerd[1469]: time="2025-09-05T00:19:42.816656685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816937 containerd[1469]: time="2025-09-05T00:19:42.816675890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816937 containerd[1469]: time="2025-09-05T00:19:42.816698085Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 00:19:42.816937 containerd[1469]: time="2025-09-05T00:19:42.816735702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816937 containerd[1469]: time="2025-09-05T00:19:42.816749405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.816937 containerd[1469]: time="2025-09-05T00:19:42.816759763Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 00:19:42.816937 containerd[1469]: time="2025-09-05T00:19:42.816835759Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 00:19:42.816937 containerd[1469]: time="2025-09-05T00:19:42.816854202Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 00:19:42.817143 containerd[1469]: time="2025-09-05T00:19:42.816865738Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 00:19:42.817143 containerd[1469]: time="2025-09-05T00:19:42.816965054Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 00:19:42.817143 containerd[1469]: time="2025-09-05T00:19:42.816974692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 00:19:42.817143 containerd[1469]: time="2025-09-05T00:19:42.816987000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 00:19:42.817143 containerd[1469]: time="2025-09-05T00:19:42.816997107Z" level=info msg="NRI interface is disabled by configuration." Sep 5 00:19:42.817143 containerd[1469]: time="2025-09-05T00:19:42.817006641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 5 00:19:42.817529 containerd[1469]: time="2025-09-05T00:19:42.817430176Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 00:19:42.817529 containerd[1469]: time="2025-09-05T00:19:42.817510986Z" level=info msg="Connect containerd service" Sep 5 00:19:42.817699 containerd[1469]: time="2025-09-05T00:19:42.817566745Z" level=info msg="using legacy CRI server" Sep 5 00:19:42.817699 containerd[1469]: time="2025-09-05T00:19:42.817576134Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:19:42.817737 containerd[1469]: time="2025-09-05T00:19:42.817679587Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 00:19:42.820829 containerd[1469]: time="2025-09-05T00:19:42.818547977Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:19:42.820829 
containerd[1469]: time="2025-09-05T00:19:42.818675387Z" level=info msg="Start subscribing containerd event" Sep 5 00:19:42.820829 containerd[1469]: time="2025-09-05T00:19:42.818717005Z" level=info msg="Start recovering state" Sep 5 00:19:42.820829 containerd[1469]: time="2025-09-05T00:19:42.818774566Z" level=info msg="Start event monitor" Sep 5 00:19:42.820829 containerd[1469]: time="2025-09-05T00:19:42.818806943Z" level=info msg="Start snapshots syncer" Sep 5 00:19:42.820829 containerd[1469]: time="2025-09-05T00:19:42.818815696Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:19:42.820829 containerd[1469]: time="2025-09-05T00:19:42.818824230Z" level=info msg="Start streaming server" Sep 5 00:19:42.820829 containerd[1469]: time="2025-09-05T00:19:42.819526051Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 00:19:42.820829 containerd[1469]: time="2025-09-05T00:19:42.819586290Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:19:42.823093 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 00:19:42.823542 containerd[1469]: time="2025-09-05T00:19:42.823514457Z" level=info msg="containerd successfully booted in 0.107747s" Sep 5 00:19:42.904101 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:19:42.930091 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:19:42.943185 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:19:42.953578 tar[1464]: linux-amd64/README.md Sep 5 00:19:42.954339 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 00:19:42.954595 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:19:42.958763 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 00:19:42.967605 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:19:42.977862 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:19:42.991293 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:19:42.994008 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 00:19:42.995338 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:19:43.296204 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:19:43.310177 systemd[1]: Started sshd@0-10.0.0.128:22-10.0.0.1:54622.service - OpenSSH per-connection server daemon (10.0.0.1:54622). Sep 5 00:19:43.361350 sshd[1532]: Accepted publickey for core from 10.0.0.1 port 54622 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:19:43.365025 sshd[1532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:19:43.379163 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:19:43.391258 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:19:43.394959 systemd-logind[1451]: New session 1 of user core. Sep 5 00:19:43.408312 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:19:43.423332 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 00:19:43.429667 (systemd)[1536]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:19:43.585356 systemd[1536]: Queued start job for default target default.target. Sep 5 00:19:43.597763 systemd[1536]: Created slice app.slice - User Application Slice. 
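[Annotation] The long "Start cri plugin with config {PluginConfig:...}" dump a few entries back is containerd printing its effective CRI configuration. Rendered as /etc/containerd/config.toml, the logged values would look roughly like the following reconstruction (derived from the dump itself, not read from this host):

    # Rough config.toml equivalent of the PluginConfig dump above (reconstruction).
    cat <<'EOF' >/etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true        # Options:map[SystemdCgroup:true] in the dump
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir  = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"     # empty at boot, hence the cni load error above
    EOF

The "failed to load cni during init" error is expected at this stage: it clears once a CNI plugin drops a config into /etc/cni/net.d.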
Sep 5 00:19:43.597804 systemd[1536]: Reached target paths.target - Paths. Sep 5 00:19:43.597847 systemd[1536]: Reached target timers.target - Timers. Sep 5 00:19:43.599976 systemd[1536]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:19:43.618652 systemd[1536]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:19:43.618893 systemd[1536]: Reached target sockets.target - Sockets. Sep 5 00:19:43.618920 systemd[1536]: Reached target basic.target - Basic System. Sep 5 00:19:43.618987 systemd[1536]: Reached target default.target - Main User Target. Sep 5 00:19:43.619035 systemd[1536]: Startup finished in 177ms. Sep 5 00:19:43.619714 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:19:43.622758 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:19:43.692581 systemd[1]: Started sshd@1-10.0.0.128:22-10.0.0.1:54632.service - OpenSSH per-connection server daemon (10.0.0.1:54632). Sep 5 00:19:43.748193 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 54632 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:19:43.750154 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:19:43.754406 systemd-logind[1451]: New session 2 of user core. Sep 5 00:19:43.763985 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 00:19:43.827891 sshd[1547]: pam_unix(sshd:session): session closed for user core Sep 5 00:19:43.829941 systemd-networkd[1387]: eth0: Gained IPv6LL Sep 5 00:19:43.837724 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:19:43.839844 systemd[1]: sshd@1-10.0.0.128:22-10.0.0.1:54632.service: Deactivated successfully. Sep 5 00:19:43.841966 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:19:43.842676 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:19:43.844481 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:19:43.858218 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:19:43.861353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:19:43.864036 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:19:43.866794 systemd[1]: Started sshd@2-10.0.0.128:22-10.0.0.1:54642.service - OpenSSH per-connection server daemon (10.0.0.1:54642). Sep 5 00:19:43.875599 systemd-logind[1451]: Removed session 2. Sep 5 00:19:43.894449 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 00:19:43.901736 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:19:43.902041 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 00:19:43.903598 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 00:19:43.905873 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 54642 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:19:43.907608 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:19:43.911980 systemd-logind[1451]: New session 3 of user core. Sep 5 00:19:43.918951 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:19:43.978598 sshd[1558]: pam_unix(sshd:session): session closed for user core Sep 5 00:19:43.983122 systemd[1]: sshd@2-10.0.0.128:22-10.0.0.1:54642.service: Deactivated successfully. 
Sep 5 00:19:43.985477 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:19:43.986288 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:19:43.987606 systemd-logind[1451]: Removed session 3. Sep 5 00:19:44.654270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:19:44.656481 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:19:44.657917 systemd[1]: Startup finished in 1.192s (kernel) + 10.462s (initrd) + 5.508s (userspace) = 17.162s. Sep 5 00:19:44.680597 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:19:45.135306 kubelet[1582]: E0905 00:19:45.135122 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:19:45.140224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:19:45.140448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:19:45.140900 systemd[1]: kubelet.service: Consumed 1.070s CPU time. Sep 5 00:19:54.169284 systemd[1]: Started sshd@3-10.0.0.128:22-10.0.0.1:57546.service - OpenSSH per-connection server daemon (10.0.0.1:57546). Sep 5 00:19:54.202399 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 57546 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:19:54.204064 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:19:54.208337 systemd-logind[1451]: New session 4 of user core. Sep 5 00:19:54.224962 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:19:54.281148 sshd[1595]: pam_unix(sshd:session): session closed for user core Sep 5 00:19:54.299536 systemd[1]: sshd@3-10.0.0.128:22-10.0.0.1:57546.service: Deactivated successfully. Sep 5 00:19:54.301883 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 00:19:54.303485 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:19:54.315071 systemd[1]: Started sshd@4-10.0.0.128:22-10.0.0.1:57560.service - OpenSSH per-connection server daemon (10.0.0.1:57560). Sep 5 00:19:54.316343 systemd-logind[1451]: Removed session 4. Sep 5 00:19:54.343816 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 57560 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:19:54.345372 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:19:54.349755 systemd-logind[1451]: New session 5 of user core. Sep 5 00:19:54.359930 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 00:19:54.410238 sshd[1602]: pam_unix(sshd:session): session closed for user core Sep 5 00:19:54.420330 systemd[1]: sshd@4-10.0.0.128:22-10.0.0.1:57560.service: Deactivated successfully. Sep 5 00:19:54.422321 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 00:19:54.423978 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:19:54.425341 systemd[1]: Started sshd@5-10.0.0.128:22-10.0.0.1:57570.service - OpenSSH per-connection server daemon (10.0.0.1:57570). Sep 5 00:19:54.426206 systemd-logind[1451]: Removed session 5. 
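[Annotation] The kubelet crash at 00:19:45 ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is normal on a node that has not yet been joined: kubeadm writes that file during init/join. For reference, a minimal hand-rolled sketch of what the kubelet is looking for (assumed values; the kubeadm-generated file is much fuller):

    # Minimal KubeletConfiguration sketch (assumed; normally produced by
    # `kubeadm init`/`kubeadm join`, which has not run yet at this point).
    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd               # pairs with SystemdCgroup=true on the containerd side
    staticPodPath: /etc/kubernetes/manifests
    EOF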
Sep 5 00:19:54.457483 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 57570 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:19:54.459046 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:19:54.463223 systemd-logind[1451]: New session 6 of user core. Sep 5 00:19:54.476917 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 00:19:54.535466 sshd[1609]: pam_unix(sshd:session): session closed for user core Sep 5 00:19:54.543675 systemd[1]: sshd@5-10.0.0.128:22-10.0.0.1:57570.service: Deactivated successfully. Sep 5 00:19:54.545620 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:19:54.547224 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:19:54.558134 systemd[1]: Started sshd@6-10.0.0.128:22-10.0.0.1:57578.service - OpenSSH per-connection server daemon (10.0.0.1:57578). Sep 5 00:19:54.559143 systemd-logind[1451]: Removed session 6. Sep 5 00:19:54.590116 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 57578 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:19:54.592122 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:19:54.596627 systemd-logind[1451]: New session 7 of user core. Sep 5 00:19:54.605949 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:19:54.665951 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:19:54.666317 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:19:54.685241 sudo[1619]: pam_unix(sudo:session): session closed for user root Sep 5 00:19:54.687440 sshd[1616]: pam_unix(sshd:session): session closed for user core Sep 5 00:19:54.696572 systemd[1]: sshd@6-10.0.0.128:22-10.0.0.1:57578.service: Deactivated successfully. Sep 5 00:19:54.698597 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:19:54.700769 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:19:54.719258 systemd[1]: Started sshd@7-10.0.0.128:22-10.0.0.1:57588.service - OpenSSH per-connection server daemon (10.0.0.1:57588). Sep 5 00:19:54.720402 systemd-logind[1451]: Removed session 7. Sep 5 00:19:54.748650 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 57588 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:19:54.750429 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:19:54.755019 systemd-logind[1451]: New session 8 of user core. Sep 5 00:19:54.766040 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 00:19:54.821816 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:19:54.822182 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:19:54.826615 sudo[1628]: pam_unix(sudo:session): session closed for user root Sep 5 00:19:54.833572 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 00:19:54.833929 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:19:54.854168 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 00:19:54.855788 auditctl[1631]: No rules Sep 5 00:19:54.857162 systemd[1]: audit-rules.service: Deactivated successfully. 
Sep 5 00:19:54.857423 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 00:19:54.859438 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:19:54.891647 augenrules[1649]: No rules Sep 5 00:19:54.893451 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:19:54.894907 sudo[1627]: pam_unix(sudo:session): session closed for user root Sep 5 00:19:54.896726 sshd[1624]: pam_unix(sshd:session): session closed for user core Sep 5 00:19:54.906725 systemd[1]: sshd@7-10.0.0.128:22-10.0.0.1:57588.service: Deactivated successfully. Sep 5 00:19:54.908555 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:19:54.910215 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:19:54.921133 systemd[1]: Started sshd@8-10.0.0.128:22-10.0.0.1:57602.service - OpenSSH per-connection server daemon (10.0.0.1:57602). Sep 5 00:19:54.922260 systemd-logind[1451]: Removed session 8. Sep 5 00:19:54.948133 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 57602 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:19:54.949949 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:19:54.953897 systemd-logind[1451]: New session 9 of user core. Sep 5 00:19:54.964956 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:19:55.018558 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:19:55.018941 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:19:55.390742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:19:55.502069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:19:55.573142 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 00:19:55.573215 (dockerd)[1681]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:19:55.769210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:19:55.775322 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:19:55.880533 kubelet[1687]: E0905 00:19:55.880460 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:19:55.887720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:19:55.887967 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:19:56.312212 dockerd[1681]: time="2025-09-05T00:19:56.312136184Z" level=info msg="Starting up" Sep 5 00:19:57.438711 dockerd[1681]: time="2025-09-05T00:19:57.438631220Z" level=info msg="Loading containers: start." Sep 5 00:19:57.803821 kernel: Initializing XFRM netlink socket Sep 5 00:19:57.900566 systemd-networkd[1387]: docker0: Link UP Sep 5 00:19:57.950649 dockerd[1681]: time="2025-09-05T00:19:57.950591874Z" level=info msg="Loading containers: done." Sep 5 00:19:57.966823 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck627933510-merged.mount: Deactivated successfully. 
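[Annotation] The audit sequence near the top of this stretch (sudo rm of the rules files, then systemctl restart audit-rules, with auditctl and augenrules both answering "No rules") is the standard auditd reload path. Roughly, the service cycle boils down to the following standard auditd tooling (listed as an assumption about what the unit runs):

    # What the audit-rules cycle above amounts to (sketch).
    auditctl -D          # flush currently loaded rules; replies "No rules"
    augenrules --load    # recompile and load /etc/audit/rules.d/*.rules, now empty
    auditctl -l          # verify: prints "No rules"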
Sep 5 00:19:57.993564 dockerd[1681]: time="2025-09-05T00:19:57.993484837Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:19:57.993723 dockerd[1681]: time="2025-09-05T00:19:57.993639800Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 00:19:57.993865 dockerd[1681]: time="2025-09-05T00:19:57.993835407Z" level=info msg="Daemon has completed initialization" Sep 5 00:19:58.175925 dockerd[1681]: time="2025-09-05T00:19:58.175726066Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:19:58.176139 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:19:58.980753 containerd[1469]: time="2025-09-05T00:19:58.980692102Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 5 00:20:01.994094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963762747.mount: Deactivated successfully. Sep 5 00:20:06.004648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 00:20:06.014018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:20:06.204583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:20:06.211084 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:20:06.768424 kubelet[1871]: E0905 00:20:06.768353 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:20:06.774211 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:20:06.774486 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
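[Annotation] The "Scheduled restart job, restart counter is at N" lines track systemd relaunching the crashing kubelet: the failures at 00:19:45, 00:19:55 and 00:20:06 are each followed by a fresh start attempt roughly ten seconds later. That cadence is consistent with the stock kubeadm unit settings, sketched below (assumed, not read from this host; the drop-in name is illustrative):

    # Restart policy consistent with the observed ~10 s crash/restart cadence
    # (assumed values; kubeadm's packaged kubelet.service ships the same ones).
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/20-restart.conf
    [Service]
    Restart=always
    RestartSec=10
    EOF
    systemctl daemon-reload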
Sep 5 00:20:07.333996 containerd[1469]: time="2025-09-05T00:20:07.333921600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:07.334727 containerd[1469]: time="2025-09-05T00:20:07.334672032Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 5 00:20:07.335995 containerd[1469]: time="2025-09-05T00:20:07.335924949Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:07.338849 containerd[1469]: time="2025-09-05T00:20:07.338799055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:07.339842 containerd[1469]: time="2025-09-05T00:20:07.339802771Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 8.359056306s" Sep 5 00:20:07.339904 containerd[1469]: time="2025-09-05T00:20:07.339848652Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 5 00:20:07.340506 containerd[1469]: time="2025-09-05T00:20:07.340464932Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 5 00:20:08.990058 containerd[1469]: time="2025-09-05T00:20:08.989951146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:08.990737 containerd[1469]: time="2025-09-05T00:20:08.990688715Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 5 00:20:08.992176 containerd[1469]: time="2025-09-05T00:20:08.992133297Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:08.995542 containerd[1469]: time="2025-09-05T00:20:08.995492387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:08.996412 containerd[1469]: time="2025-09-05T00:20:08.996376954Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.655871831s" Sep 5 00:20:08.996454 containerd[1469]: time="2025-09-05T00:20:08.996410740Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 5 00:20:08.997079 containerd[1469]: 
time="2025-09-05T00:20:08.997027040Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 5 00:20:11.341954 containerd[1469]: time="2025-09-05T00:20:11.341867787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:11.351300 containerd[1469]: time="2025-09-05T00:20:11.351224580Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 5 00:20:11.372089 containerd[1469]: time="2025-09-05T00:20:11.372010667Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:11.421983 containerd[1469]: time="2025-09-05T00:20:11.421916975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:11.423024 containerd[1469]: time="2025-09-05T00:20:11.422996208Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 2.425928916s" Sep 5 00:20:11.423096 containerd[1469]: time="2025-09-05T00:20:11.423026079Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 5 00:20:11.423707 containerd[1469]: time="2025-09-05T00:20:11.423587318Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 5 00:20:16.723977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1259744591.mount: Deactivated successfully. Sep 5 00:20:17.004682 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 5 00:20:17.014069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:20:17.205463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:20:17.210574 (kubelet)[1939]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:20:17.254583 kubelet[1939]: E0905 00:20:17.254507 1939 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:20:17.259580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:20:17.259868 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 5 00:20:17.524262 containerd[1469]: time="2025-09-05T00:20:17.523085731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:17.526835 containerd[1469]: time="2025-09-05T00:20:17.526032975Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 5 00:20:17.526835 containerd[1469]: time="2025-09-05T00:20:17.526161256Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:17.530161 containerd[1469]: time="2025-09-05T00:20:17.529457795Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 6.105835403s" Sep 5 00:20:17.530161 containerd[1469]: time="2025-09-05T00:20:17.529533019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 5 00:20:17.530161 containerd[1469]: time="2025-09-05T00:20:17.529800050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:17.530729 containerd[1469]: time="2025-09-05T00:20:17.530663414Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 5 00:20:18.082196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3059621297.mount: Deactivated successfully. 
Sep 5 00:20:19.589418 containerd[1469]: time="2025-09-05T00:20:19.589323635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:19.590261 containerd[1469]: time="2025-09-05T00:20:19.590179404Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 5 00:20:19.591523 containerd[1469]: time="2025-09-05T00:20:19.591438186Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:19.596000 containerd[1469]: time="2025-09-05T00:20:19.595934810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:19.597420 containerd[1469]: time="2025-09-05T00:20:19.597380851Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.066671249s" Sep 5 00:20:19.597472 containerd[1469]: time="2025-09-05T00:20:19.597424451Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 5 00:20:19.597995 containerd[1469]: time="2025-09-05T00:20:19.597962973Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:20:20.073997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468223252.mount: Deactivated successfully. 
Sep 5 00:20:20.080421 containerd[1469]: time="2025-09-05T00:20:20.080383609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:20.081140 containerd[1469]: time="2025-09-05T00:20:20.081081678Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 5 00:20:20.082389 containerd[1469]: time="2025-09-05T00:20:20.082358091Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:20.085174 containerd[1469]: time="2025-09-05T00:20:20.085132163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:20.086026 containerd[1469]: time="2025-09-05T00:20:20.085986499Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 487.987697ms" Sep 5 00:20:20.086026 containerd[1469]: time="2025-09-05T00:20:20.086021275Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 00:20:20.086527 containerd[1469]: time="2025-09-05T00:20:20.086503941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 5 00:20:20.694784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248302092.mount: Deactivated successfully. Sep 5 00:20:22.636982 containerd[1469]: time="2025-09-05T00:20:22.636899605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:22.677379 containerd[1469]: time="2025-09-05T00:20:22.677282144Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 5 00:20:22.744229 containerd[1469]: time="2025-09-05T00:20:22.744168580Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:22.902603 containerd[1469]: time="2025-09-05T00:20:22.902433778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:22.904419 containerd[1469]: time="2025-09-05T00:20:22.904365373Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.817829884s" Sep 5 00:20:22.904419 containerd[1469]: time="2025-09-05T00:20:22.904409920Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 5 00:20:26.084186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
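[Annotation] No kubelet is running during this stretch, yet containerd pulls the full control-plane set: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd. That set matches kubeadm's image pre-fetch, so the /home/core/install.sh run under sudo at 00:19:55 plausibly issued something like the following (an assumption; the script's contents do not appear in this log):

    # Plausible source of the pull sequence above (assumed, inferred from the
    # image set; install.sh itself is opaque in this log).
    kubeadm config images pull --kubernetes-version v1.33.4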
Sep 5 00:20:26.094060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:20:26.118018 systemd[1]: Reloading requested from client PID 2088 ('systemctl') (unit session-9.scope)... Sep 5 00:20:26.118036 systemd[1]: Reloading... Sep 5 00:20:26.197803 zram_generator::config[2128]: No configuration found. Sep 5 00:20:26.385506 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:20:26.465390 systemd[1]: Reloading finished in 346 ms. Sep 5 00:20:26.524724 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 00:20:26.524873 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 00:20:26.525171 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:20:26.527030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:20:26.711926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:20:26.717325 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:20:26.763714 kubelet[2176]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:20:26.763714 kubelet[2176]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:20:26.763714 kubelet[2176]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
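[Annotation] After the daemon-reload at 00:20:26 the kubelet is finally started with real arguments; the deprecation warnings for --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir, together with the earlier "Referenced but unset ... KUBELET_KUBEADM_ARGS" notes, match kubeadm's standard drop-in arrangement. Roughly (a sketch of the stock layout, not the files from this host):

    # Sketch of kubeadm's kubelet drop-in and flags file (assumed contents).
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env   # written by kubeadm
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
    EOF
    cat <<'EOF' >/var/lib/kubelet/kubeadm-flags.env
    KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"
    EOF
    systemctl daemon-reload && systemctl restart kubelet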
Sep 5 00:20:26.764205 kubelet[2176]: I0905 00:20:26.763745 2176 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:20:27.147019 kubelet[2176]: I0905 00:20:27.146956 2176 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:20:27.147019 kubelet[2176]: I0905 00:20:27.146995 2176 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:20:27.147243 kubelet[2176]: I0905 00:20:27.147228 2176 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:20:27.187739 kubelet[2176]: I0905 00:20:27.187661 2176 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:20:27.188645 kubelet[2176]: E0905 00:20:27.188225 2176 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 00:20:27.195685 kubelet[2176]: E0905 00:20:27.195625 2176 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:20:27.195685 kubelet[2176]: I0905 00:20:27.195658 2176 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:20:27.201450 kubelet[2176]: I0905 00:20:27.201404 2176 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:20:27.201708 kubelet[2176]: I0905 00:20:27.201659 2176 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:20:27.201914 kubelet[2176]: I0905 00:20:27.201684 2176 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:20:27.201914 kubelet[2176]: I0905 00:20:27.201903 2176 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:20:27.201914 kubelet[2176]: I0905 00:20:27.201913 2176 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:20:27.202752 kubelet[2176]: I0905 00:20:27.202707 2176 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:20:27.205467 kubelet[2176]: I0905 00:20:27.205421 2176 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:20:27.205467 kubelet[2176]: I0905 00:20:27.205441 2176 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:20:27.206479 kubelet[2176]: I0905 00:20:27.206436 2176 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:20:27.206479 kubelet[2176]: I0905 00:20:27.206465 2176 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:20:27.215838 kubelet[2176]: I0905 00:20:27.215799 2176 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:20:27.216598 kubelet[2176]: I0905 00:20:27.216298 2176 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 00:20:27.216598 kubelet[2176]: E0905 00:20:27.216526 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:20:27.216598 
kubelet[2176]: E0905 00:20:27.216526 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:20:27.217589 kubelet[2176]: W0905 00:20:27.217551 2176 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:20:27.221065 kubelet[2176]: I0905 00:20:27.221032 2176 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:20:27.221138 kubelet[2176]: I0905 00:20:27.221087 2176 server.go:1289] "Started kubelet" Sep 5 00:20:27.221188 kubelet[2176]: I0905 00:20:27.221164 2176 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:20:27.228610 kubelet[2176]: I0905 00:20:27.227853 2176 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:20:27.228610 kubelet[2176]: I0905 00:20:27.228478 2176 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:20:27.229135 kubelet[2176]: I0905 00:20:27.229096 2176 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:20:27.230867 kubelet[2176]: E0905 00:20:27.229225 2176 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623af4afa33b84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:20:27.221056388 +0000 UTC m=+0.499388302,LastTimestamp:2025-09-05 00:20:27.221056388 +0000 UTC m=+0.499388302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:20:27.232479 kubelet[2176]: E0905 00:20:27.232412 2176 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:20:27.234857 kubelet[2176]: I0905 00:20:27.233162 2176 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:20:27.234857 kubelet[2176]: I0905 00:20:27.233733 2176 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:20:27.235011 kubelet[2176]: I0905 00:20:27.234882 2176 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:20:27.235011 kubelet[2176]: E0905 00:20:27.234994 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:27.235359 kubelet[2176]: I0905 00:20:27.235322 2176 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:20:27.235460 kubelet[2176]: I0905 00:20:27.235433 2176 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:20:27.235881 kubelet[2176]: E0905 00:20:27.235846 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:20:27.235945 kubelet[2176]: E0905 00:20:27.235909 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="200ms" Sep 5 00:20:27.237006 kubelet[2176]: I0905 00:20:27.236975 2176 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:20:27.247982 kubelet[2176]: I0905 00:20:27.247947 2176 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:20:27.247982 kubelet[2176]: I0905 00:20:27.247969 2176 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:20:27.266239 kubelet[2176]: I0905 00:20:27.266162 2176 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 00:20:27.268103 kubelet[2176]: I0905 00:20:27.268075 2176 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:20:27.268103 kubelet[2176]: I0905 00:20:27.268089 2176 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:20:27.268103 kubelet[2176]: I0905 00:20:27.268105 2176 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:20:27.268503 kubelet[2176]: I0905 00:20:27.268463 2176 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 00:20:27.268503 kubelet[2176]: I0905 00:20:27.268487 2176 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:20:27.269254 kubelet[2176]: I0905 00:20:27.269212 2176 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
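
Editor's note: the "Failed to ensure lease exists, will retry" entries above and below show the kubelet backing off while the API server at 10.0.0.128:6443 refuses connections — the logged interval doubles through 200ms, 400ms, 800ms, 1.6s, and 3.2s. A minimal stdlib Go sketch of that doubling-with-cap retry loop follows; the 7s cap is an assumption for this sketch, and tryEnsureLease is a hypothetical stand-in that only tests TCP reachability rather than performing the real lease update.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // tryEnsureLease stands in for the kubelet's lease renewal; here it just
    // dials the API server address and reports whether the TCP connect worked.
    func tryEnsureLease(addr string) error {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return err
    	}
    	return conn.Close()
    }

    func main() {
    	const apiServer = "10.0.0.128:6443" // address from the log above
    	interval := 200 * time.Millisecond  // first "interval=200ms" in the log
    	const maxInterval = 7 * time.Second // assumed cap for this sketch

    	for {
    		if err := tryEnsureLease(apiServer); err == nil {
    			fmt.Println("lease endpoint reachable")
    			return
    		} else {
    			fmt.Printf("failed to ensure lease exists, will retry err=%v interval=%s\n", err, interval)
    		}
    		time.Sleep(interval)
    		interval *= 2
    		if interval > maxInterval {
    			interval = maxInterval
    		}
    	}
    }

Doubling with a cap keeps the kubelet from hammering a control plane that is still starting while still reconnecting quickly once it answers, which matches the interval progression recorded in this log.
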
Sep 5 00:20:27.270290 kubelet[2176]: I0905 00:20:27.270192 2176 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:20:27.270374 kubelet[2176]: E0905 00:20:27.270299 2176 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:20:27.270622 kubelet[2176]: E0905 00:20:27.270204 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:20:27.335751 kubelet[2176]: E0905 00:20:27.335649 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:27.371092 kubelet[2176]: E0905 00:20:27.371004 2176 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:20:27.436653 kubelet[2176]: E0905 00:20:27.436437 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:27.436862 kubelet[2176]: E0905 00:20:27.436816 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="400ms" Sep 5 00:20:27.526905 update_engine[1452]: I20250905 00:20:27.526656 1452 update_attempter.cc:509] Updating boot flags... Sep 5 00:20:27.537297 kubelet[2176]: E0905 00:20:27.537193 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:27.571538 kubelet[2176]: E0905 00:20:27.571458 2176 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:20:27.638087 kubelet[2176]: E0905 00:20:27.638005 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:27.738834 kubelet[2176]: E0905 00:20:27.738668 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:27.837825 kubelet[2176]: E0905 00:20:27.837716 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="800ms" Sep 5 00:20:27.839650 kubelet[2176]: E0905 00:20:27.839596 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:27.940319 kubelet[2176]: E0905 00:20:27.940249 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:27.972559 kubelet[2176]: E0905 00:20:27.972487 2176 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:20:28.023404 kubelet[2176]: E0905 00:20:28.023344 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:20:28.040949 kubelet[2176]: E0905 00:20:28.040894 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:28.141472 kubelet[2176]: E0905 00:20:28.141395 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:28.242436 kubelet[2176]: E0905 00:20:28.242369 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:28.260276 kubelet[2176]: E0905 00:20:28.260211 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:20:28.267259 kubelet[2176]: E0905 00:20:28.266960 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:20:28.343167 kubelet[2176]: E0905 00:20:28.343003 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:28.443714 kubelet[2176]: E0905 00:20:28.443650 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:28.535418 kubelet[2176]: E0905 00:20:28.535359 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:20:28.544599 kubelet[2176]: E0905 00:20:28.544563 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:28.638888 kubelet[2176]: E0905 00:20:28.638653 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="1.6s" Sep 5 00:20:28.644692 kubelet[2176]: E0905 00:20:28.644629 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:28.745501 kubelet[2176]: E0905 00:20:28.745451 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:28.756324 kubelet[2176]: I0905 00:20:28.755907 2176 policy_none.go:49] "None policy: Start" Sep 5 00:20:28.756324 kubelet[2176]: I0905 00:20:28.755949 2176 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:20:28.756324 kubelet[2176]: I0905 00:20:28.755969 2176 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:20:28.762802 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2221) Sep 5 00:20:28.773887 kubelet[2176]: E0905 00:20:28.773846 2176 kubelet.go:2460] "Skipping pod synchronization" err="container runtime 
status check may not have completed yet" Sep 5 00:20:28.799809 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2221) Sep 5 00:20:28.805697 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:20:28.838845 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2221) Sep 5 00:20:28.841587 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:20:28.846028 kubelet[2176]: E0905 00:20:28.845999 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:20:28.859951 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:20:28.888132 kubelet[2176]: E0905 00:20:28.887928 2176 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:20:28.888268 kubelet[2176]: I0905 00:20:28.888201 2176 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:20:28.888268 kubelet[2176]: I0905 00:20:28.888214 2176 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:20:28.888500 kubelet[2176]: I0905 00:20:28.888481 2176 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:20:28.889691 kubelet[2176]: E0905 00:20:28.889599 2176 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 00:20:28.889691 kubelet[2176]: E0905 00:20:28.889650 2176 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:20:28.990977 kubelet[2176]: I0905 00:20:28.990908 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:20:28.991325 kubelet[2176]: E0905 00:20:28.991272 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Sep 5 00:20:29.193628 kubelet[2176]: I0905 00:20:29.193500 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:20:29.193907 kubelet[2176]: E0905 00:20:29.193851 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Sep 5 00:20:29.196655 kubelet[2176]: E0905 00:20:29.196635 2176 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 00:20:29.596128 kubelet[2176]: I0905 00:20:29.596016 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:20:29.596458 kubelet[2176]: E0905 00:20:29.596387 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Sep 5 00:20:29.997940 kubelet[2176]: E0905 00:20:29.997763 2176 reflector.go:200] "Failed to 
watch" err="failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:20:30.134428 kubelet[2176]: E0905 00:20:30.134366 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:20:30.239634 kubelet[2176]: E0905 00:20:30.239554 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="3.2s" Sep 5 00:20:30.337968 kubelet[2176]: E0905 00:20:30.337904 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:20:30.397747 kubelet[2176]: I0905 00:20:30.397694 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:20:30.398135 kubelet[2176]: E0905 00:20:30.398097 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Sep 5 00:20:30.456055 kubelet[2176]: I0905 00:20:30.455998 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cff81e42815887b6cbb1535ed7993c0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6cff81e42815887b6cbb1535ed7993c0\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:20:30.456055 kubelet[2176]: I0905 00:20:30.456051 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cff81e42815887b6cbb1535ed7993c0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6cff81e42815887b6cbb1535ed7993c0\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:20:30.456055 kubelet[2176]: I0905 00:20:30.456076 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cff81e42815887b6cbb1535ed7993c0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6cff81e42815887b6cbb1535ed7993c0\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:20:30.484244 systemd[1]: Created slice kubepods-burstable-pod6cff81e42815887b6cbb1535ed7993c0.slice - libcontainer container kubepods-burstable-pod6cff81e42815887b6cbb1535ed7993c0.slice. 
Sep 5 00:20:30.498936 kubelet[2176]: E0905 00:20:30.498909 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:20:30.508844 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 5 00:20:30.510492 kubelet[2176]: E0905 00:20:30.510461 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:20:30.527273 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 5 00:20:30.529236 kubelet[2176]: E0905 00:20:30.529196 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:20:30.556627 kubelet[2176]: I0905 00:20:30.556549 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:30.556627 kubelet[2176]: I0905 00:20:30.556607 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:30.556871 kubelet[2176]: I0905 00:20:30.556675 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:20:30.556871 kubelet[2176]: I0905 00:20:30.556767 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:30.556871 kubelet[2176]: I0905 00:20:30.556830 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:30.556871 kubelet[2176]: I0905 00:20:30.556860 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:30.799479 kubelet[2176]: E0905 00:20:30.799393 2176 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:30.800323 containerd[1469]: time="2025-09-05T00:20:30.800270210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6cff81e42815887b6cbb1535ed7993c0,Namespace:kube-system,Attempt:0,}" Sep 5 00:20:30.811576 kubelet[2176]: E0905 00:20:30.811503 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:30.812003 containerd[1469]: time="2025-09-05T00:20:30.811971165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 5 00:20:30.830272 kubelet[2176]: E0905 00:20:30.830241 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:30.830698 containerd[1469]: time="2025-09-05T00:20:30.830650666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 5 00:20:31.340900 kubelet[2176]: E0905 00:20:31.340852 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:20:31.540719 kubelet[2176]: E0905 00:20:31.540590 2176 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623af4afa33b84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:20:27.221056388 +0000 UTC m=+0.499388302,LastTimestamp:2025-09-05 00:20:27.221056388 +0000 UTC m=+0.499388302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:20:31.545891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2362385501.mount: Deactivated successfully. 
Sep 5 00:20:31.551426 containerd[1469]: time="2025-09-05T00:20:31.551364916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:20:31.553227 containerd[1469]: time="2025-09-05T00:20:31.553146807Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:20:31.554215 containerd[1469]: time="2025-09-05T00:20:31.554175406Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:20:31.555229 containerd[1469]: time="2025-09-05T00:20:31.555198875Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:20:31.556146 containerd[1469]: time="2025-09-05T00:20:31.556114836Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:20:31.556859 containerd[1469]: time="2025-09-05T00:20:31.556820931Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:20:31.557848 containerd[1469]: time="2025-09-05T00:20:31.557804231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 5 00:20:31.559448 containerd[1469]: time="2025-09-05T00:20:31.559403435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:20:31.561232 containerd[1469]: time="2025-09-05T00:20:31.561199717Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 749.176867ms" Sep 5 00:20:31.561873 containerd[1469]: time="2025-09-05T00:20:31.561841690Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 761.484075ms" Sep 5 00:20:31.564803 containerd[1469]: time="2025-09-05T00:20:31.564745227Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 734.020856ms" Sep 5 00:20:31.739990 containerd[1469]: time="2025-09-05T00:20:31.739761195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:20:31.739990 containerd[1469]: time="2025-09-05T00:20:31.739839810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:20:31.739990 containerd[1469]: time="2025-09-05T00:20:31.739855174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:31.739990 containerd[1469]: time="2025-09-05T00:20:31.739939129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:31.741331 containerd[1469]: time="2025-09-05T00:20:31.741258644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:20:31.741949 containerd[1469]: time="2025-09-05T00:20:31.741909066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:20:31.742003 containerd[1469]: time="2025-09-05T00:20:31.739609980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:20:31.742003 containerd[1469]: time="2025-09-05T00:20:31.741959689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:20:31.742003 containerd[1469]: time="2025-09-05T00:20:31.741981186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:31.742085 containerd[1469]: time="2025-09-05T00:20:31.742007204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:31.742111 containerd[1469]: time="2025-09-05T00:20:31.742071927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:31.742449 containerd[1469]: time="2025-09-05T00:20:31.742195441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:31.799164 systemd[1]: Started cri-containerd-3318a96356db37f87555b79b92780f6303ab502aba26a6ca8cdb9ceb151bf186.scope - libcontainer container 3318a96356db37f87555b79b92780f6303ab502aba26a6ca8cdb9ceb151bf186. Sep 5 00:20:31.801308 systemd[1]: Started cri-containerd-7613453f4c5202558dc53cd558cd5810ece9c5a576f928846ca2860595af4be2.scope - libcontainer container 7613453f4c5202558dc53cd558cd5810ece9c5a576f928846ca2860595af4be2. Sep 5 00:20:31.828059 systemd[1]: Started cri-containerd-06f6fbf8eaf4045f00e13351a5b652b16f50a2ab8a9b4520b9e6d947394073a0.scope - libcontainer container 06f6fbf8eaf4045f00e13351a5b652b16f50a2ab8a9b4520b9e6d947394073a0. 
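
Editor's note: the containerd entries above show the registry.k8s.io/pause:3.8 sandbox image being resolved before the three RunPodSandbox calls return, and the runc v2 shim plugins loading for each sandbox scope. Below is a hedged sketch of the same pull through containerd's Go client; the socket path and the "k8s.io" namespace are the CRI plugin's conventional defaults, and the exact client API is version-dependent (this matches the v1.7.x line reported earlier in the log).

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// The CRI plugin keeps Kubernetes images under the "k8s.io" namespace.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Same sandbox image the log shows being pulled (pause:3.8).
    	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	size, err := img.Size(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
    }

The pause image is tiny (the log reports roughly 311 KiB), since its only job is to hold each sandbox's namespaces open while the real containers are created inside it.
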
Sep 5 00:20:31.876562 containerd[1469]: time="2025-09-05T00:20:31.876494182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6cff81e42815887b6cbb1535ed7993c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3318a96356db37f87555b79b92780f6303ab502aba26a6ca8cdb9ceb151bf186\"" Sep 5 00:20:31.880802 kubelet[2176]: E0905 00:20:31.880523 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:31.882528 containerd[1469]: time="2025-09-05T00:20:31.882460727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7613453f4c5202558dc53cd558cd5810ece9c5a576f928846ca2860595af4be2\"" Sep 5 00:20:31.883541 kubelet[2176]: E0905 00:20:31.883504 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:31.885332 containerd[1469]: time="2025-09-05T00:20:31.885283484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"06f6fbf8eaf4045f00e13351a5b652b16f50a2ab8a9b4520b9e6d947394073a0\"" Sep 5 00:20:31.886500 kubelet[2176]: E0905 00:20:31.886483 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:31.888917 containerd[1469]: time="2025-09-05T00:20:31.888884998Z" level=info msg="CreateContainer within sandbox \"3318a96356db37f87555b79b92780f6303ab502aba26a6ca8cdb9ceb151bf186\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:20:31.891839 containerd[1469]: time="2025-09-05T00:20:31.891732179Z" level=info msg="CreateContainer within sandbox \"7613453f4c5202558dc53cd558cd5810ece9c5a576f928846ca2860595af4be2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:20:31.894649 containerd[1469]: time="2025-09-05T00:20:31.894608495Z" level=info msg="CreateContainer within sandbox \"06f6fbf8eaf4045f00e13351a5b652b16f50a2ab8a9b4520b9e6d947394073a0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:20:31.911692 containerd[1469]: time="2025-09-05T00:20:31.911636802Z" level=info msg="CreateContainer within sandbox \"3318a96356db37f87555b79b92780f6303ab502aba26a6ca8cdb9ceb151bf186\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b597000fbd98a0b1a110f562786834a30e2d7dcabac67f9a40f064e5f627419e\"" Sep 5 00:20:31.912391 containerd[1469]: time="2025-09-05T00:20:31.912350764Z" level=info msg="StartContainer for \"b597000fbd98a0b1a110f562786834a30e2d7dcabac67f9a40f064e5f627419e\"" Sep 5 00:20:31.916149 containerd[1469]: time="2025-09-05T00:20:31.916104997Z" level=info msg="CreateContainer within sandbox \"7613453f4c5202558dc53cd558cd5810ece9c5a576f928846ca2860595af4be2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3363a908abebd725c82f2deeccb9f448c1ff998b5fee25e5703762b0b7e984da\"" Sep 5 00:20:31.916605 containerd[1469]: time="2025-09-05T00:20:31.916576602Z" level=info msg="StartContainer for \"3363a908abebd725c82f2deeccb9f448c1ff998b5fee25e5703762b0b7e984da\"" Sep 5 00:20:31.920134 
containerd[1469]: time="2025-09-05T00:20:31.919990059Z" level=info msg="CreateContainer within sandbox \"06f6fbf8eaf4045f00e13351a5b652b16f50a2ab8a9b4520b9e6d947394073a0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4f7188c135922780231203dda157afdf6a0a4babb3c2ea463182f4d226e31647\"" Sep 5 00:20:31.920732 containerd[1469]: time="2025-09-05T00:20:31.920706758Z" level=info msg="StartContainer for \"4f7188c135922780231203dda157afdf6a0a4babb3c2ea463182f4d226e31647\"" Sep 5 00:20:31.944943 systemd[1]: Started cri-containerd-b597000fbd98a0b1a110f562786834a30e2d7dcabac67f9a40f064e5f627419e.scope - libcontainer container b597000fbd98a0b1a110f562786834a30e2d7dcabac67f9a40f064e5f627419e. Sep 5 00:20:31.953914 systemd[1]: Started cri-containerd-3363a908abebd725c82f2deeccb9f448c1ff998b5fee25e5703762b0b7e984da.scope - libcontainer container 3363a908abebd725c82f2deeccb9f448c1ff998b5fee25e5703762b0b7e984da. Sep 5 00:20:31.955993 systemd[1]: Started cri-containerd-4f7188c135922780231203dda157afdf6a0a4babb3c2ea463182f4d226e31647.scope - libcontainer container 4f7188c135922780231203dda157afdf6a0a4babb3c2ea463182f4d226e31647. Sep 5 00:20:31.998390 containerd[1469]: time="2025-09-05T00:20:31.994481414Z" level=info msg="StartContainer for \"b597000fbd98a0b1a110f562786834a30e2d7dcabac67f9a40f064e5f627419e\" returns successfully" Sep 5 00:20:32.001007 kubelet[2176]: I0905 00:20:32.000168 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:20:32.001007 kubelet[2176]: E0905 00:20:32.000443 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Sep 5 00:20:32.014697 containerd[1469]: time="2025-09-05T00:20:32.014447938Z" level=info msg="StartContainer for \"3363a908abebd725c82f2deeccb9f448c1ff998b5fee25e5703762b0b7e984da\" returns successfully" Sep 5 00:20:32.021184 containerd[1469]: time="2025-09-05T00:20:32.021106554Z" level=info msg="StartContainer for \"4f7188c135922780231203dda157afdf6a0a4babb3c2ea463182f4d226e31647\" returns successfully" Sep 5 00:20:32.283531 kubelet[2176]: E0905 00:20:32.283489 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:20:32.283912 kubelet[2176]: E0905 00:20:32.283886 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:32.287796 kubelet[2176]: E0905 00:20:32.287751 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:20:32.287957 kubelet[2176]: E0905 00:20:32.287926 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:32.288617 kubelet[2176]: E0905 00:20:32.288502 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:20:32.288891 kubelet[2176]: E0905 00:20:32.288818 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:33.291889 
kubelet[2176]: E0905 00:20:33.291814 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:20:33.293636 kubelet[2176]: E0905 00:20:33.291943 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:33.294394 kubelet[2176]: E0905 00:20:33.294317 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:20:33.294443 kubelet[2176]: E0905 00:20:33.294419 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:33.443402 kubelet[2176]: E0905 00:20:33.443361 2176 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 00:20:33.662333 kubelet[2176]: E0905 00:20:33.662137 2176 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 5 00:20:34.016564 kubelet[2176]: E0905 00:20:34.016502 2176 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 5 00:20:34.471829 kubelet[2176]: E0905 00:20:34.471421 2176 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 5 00:20:35.202357 kubelet[2176]: I0905 00:20:35.202199 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:20:35.208003 kubelet[2176]: I0905 00:20:35.207961 2176 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:20:35.208003 kubelet[2176]: E0905 00:20:35.207993 2176 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 5 00:20:35.212728 kubelet[2176]: I0905 00:20:35.212691 2176 apiserver.go:52] "Watching apiserver" Sep 5 00:20:35.236107 kubelet[2176]: I0905 00:20:35.236069 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:20:35.236498 kubelet[2176]: I0905 00:20:35.236473 2176 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:20:35.248435 kubelet[2176]: I0905 00:20:35.248389 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:20:35.250323 kubelet[2176]: E0905 00:20:35.248951 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:35.253261 kubelet[2176]: I0905 00:20:35.253226 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:35.253842 kubelet[2176]: E0905 00:20:35.253817 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:35.257735 kubelet[2176]: E0905 00:20:35.257708 2176 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:35.661124 systemd[1]: Reloading requested from client PID 2478 ('systemctl') (unit session-9.scope)... Sep 5 00:20:35.661147 systemd[1]: Reloading... Sep 5 00:20:35.734816 zram_generator::config[2517]: No configuration found. Sep 5 00:20:35.858721 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:20:35.952830 systemd[1]: Reloading finished in 291 ms. Sep 5 00:20:35.999394 kubelet[2176]: I0905 00:20:35.999213 2176 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:20:35.999365 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:20:36.021245 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:20:36.021572 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:20:36.021632 systemd[1]: kubelet.service: Consumed 1.132s CPU time, 135.9M memory peak, 0B memory swap peak. Sep 5 00:20:36.032007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:20:36.207063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:20:36.212991 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:20:36.257011 kubelet[2562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:20:36.257011 kubelet[2562]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:20:36.257011 kubelet[2562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 00:20:36.257404 kubelet[2562]: I0905 00:20:36.257089 2562 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:20:36.265753 kubelet[2562]: I0905 00:20:36.265708 2562 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:20:36.265753 kubelet[2562]: I0905 00:20:36.265737 2562 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:20:36.266017 kubelet[2562]: I0905 00:20:36.265989 2562 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:20:36.267262 kubelet[2562]: I0905 00:20:36.267236 2562 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 5 00:20:36.269857 kubelet[2562]: I0905 00:20:36.269821 2562 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:20:36.274091 kubelet[2562]: E0905 00:20:36.274024 2562 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:20:36.274091 kubelet[2562]: I0905 00:20:36.274080 2562 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:20:36.280723 kubelet[2562]: I0905 00:20:36.280691 2562 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 00:20:36.281020 kubelet[2562]: I0905 00:20:36.280980 2562 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:20:36.281166 kubelet[2562]: I0905 00:20:36.281005 2562 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:20:36.281267 kubelet[2562]: I0905 00:20:36.281169 2562 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 5 00:20:36.281267 kubelet[2562]: I0905 00:20:36.281179 2562 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:20:36.281267 kubelet[2562]: I0905 00:20:36.281226 2562 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:20:36.281431 kubelet[2562]: I0905 00:20:36.281404 2562 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:20:36.281431 kubelet[2562]: I0905 00:20:36.281421 2562 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:20:36.281484 kubelet[2562]: I0905 00:20:36.281451 2562 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:20:36.281484 kubelet[2562]: I0905 00:20:36.281474 2562 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:20:36.283795 kubelet[2562]: I0905 00:20:36.282686 2562 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:20:36.286311 kubelet[2562]: I0905 00:20:36.286278 2562 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 00:20:36.290371 kubelet[2562]: I0905 00:20:36.290343 2562 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:20:36.290472 kubelet[2562]: I0905 00:20:36.290417 2562 server.go:1289] "Started kubelet" Sep 5 00:20:36.290599 kubelet[2562]: I0905 00:20:36.290498 2562 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:20:36.291813 kubelet[2562]: I0905 00:20:36.291766 2562 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:20:36.293730 kubelet[2562]: I0905 00:20:36.292478 2562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:20:36.293730 kubelet[2562]: I0905 00:20:36.292705 2562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:20:36.293730 kubelet[2562]: I0905 00:20:36.293202 2562 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:20:36.295348 kubelet[2562]: I0905 00:20:36.294522 2562 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:20:36.296685 kubelet[2562]: I0905 00:20:36.296062 2562 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:20:36.296685 kubelet[2562]: I0905 00:20:36.296180 2562 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:20:36.296685 kubelet[2562]: I0905 00:20:36.296366 2562 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:20:36.296685 kubelet[2562]: I0905 00:20:36.296540 2562 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:20:36.298577 kubelet[2562]: I0905 00:20:36.298538 2562 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:20:36.300186 kubelet[2562]: E0905 00:20:36.300089 2562 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:20:36.301569 kubelet[2562]: I0905 00:20:36.301297 2562 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:20:36.318301 kubelet[2562]: I0905 00:20:36.318271 2562 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 00:20:36.319594 kubelet[2562]: I0905 00:20:36.319575 2562 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 00:20:36.319656 kubelet[2562]: I0905 00:20:36.319601 2562 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:20:36.319656 kubelet[2562]: I0905 00:20:36.319625 2562 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 00:20:36.319656 kubelet[2562]: I0905 00:20:36.319634 2562 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:20:36.319716 kubelet[2562]: E0905 00:20:36.319680 2562 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:20:36.340413 kubelet[2562]: I0905 00:20:36.340378 2562 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:20:36.340413 kubelet[2562]: I0905 00:20:36.340399 2562 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:20:36.340533 kubelet[2562]: I0905 00:20:36.340438 2562 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:20:36.340620 kubelet[2562]: I0905 00:20:36.340602 2562 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:20:36.340655 kubelet[2562]: I0905 00:20:36.340616 2562 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:20:36.340655 kubelet[2562]: I0905 00:20:36.340647 2562 policy_none.go:49] "None policy: Start" Sep 5 00:20:36.340701 kubelet[2562]: I0905 00:20:36.340660 2562 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:20:36.340701 kubelet[2562]: I0905 00:20:36.340672 2562 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:20:36.340801 kubelet[2562]: I0905 00:20:36.340764 2562 state_mem.go:75] "Updated machine memory state" Sep 5 00:20:36.345591 kubelet[2562]: E0905 00:20:36.345302 2562 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:20:36.345665 kubelet[2562]: I0905 00:20:36.345638 2562 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:20:36.345708 kubelet[2562]: I0905 00:20:36.345673 2562 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:20:36.345966 kubelet[2562]: I0905 00:20:36.345947 2562 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:20:36.347503 kubelet[2562]: E0905 00:20:36.347474 2562 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 00:20:36.421291 kubelet[2562]: I0905 00:20:36.421236 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:20:36.421441 kubelet[2562]: I0905 00:20:36.421319 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:36.421441 kubelet[2562]: I0905 00:20:36.421328 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:20:36.454895 kubelet[2562]: E0905 00:20:36.454834 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 00:20:36.455047 kubelet[2562]: E0905 00:20:36.454850 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 00:20:36.455262 kubelet[2562]: E0905 00:20:36.455243 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:36.458223 kubelet[2562]: I0905 00:20:36.458121 2562 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:20:36.468679 kubelet[2562]: I0905 00:20:36.468629 2562 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 5 00:20:36.468884 kubelet[2562]: I0905 00:20:36.468715 2562 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:20:36.498015 kubelet[2562]: I0905 00:20:36.497972 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cff81e42815887b6cbb1535ed7993c0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6cff81e42815887b6cbb1535ed7993c0\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:20:36.498015 kubelet[2562]: I0905 00:20:36.498012 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cff81e42815887b6cbb1535ed7993c0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6cff81e42815887b6cbb1535ed7993c0\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:20:36.498165 kubelet[2562]: I0905 00:20:36.498038 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cff81e42815887b6cbb1535ed7993c0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6cff81e42815887b6cbb1535ed7993c0\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:20:36.498165 kubelet[2562]: I0905 00:20:36.498061 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:36.498165 kubelet[2562]: I0905 00:20:36.498079 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:36.498165 kubelet[2562]: I0905 00:20:36.498098 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:36.498165 kubelet[2562]: I0905 00:20:36.498118 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:36.498314 kubelet[2562]: I0905 00:20:36.498138 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:36.498314 kubelet[2562]: I0905 00:20:36.498161 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:20:36.755863 kubelet[2562]: E0905 00:20:36.755830 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:36.756165 kubelet[2562]: E0905 00:20:36.755879 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:36.756165 kubelet[2562]: E0905 00:20:36.756006 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:37.291772 kubelet[2562]: I0905 00:20:37.291735 2562 apiserver.go:52] "Watching apiserver" Sep 5 00:20:37.296323 kubelet[2562]: I0905 00:20:37.296289 2562 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:20:37.330593 kubelet[2562]: I0905 00:20:37.330551 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:20:37.331113 kubelet[2562]: I0905 00:20:37.330888 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:37.331200 kubelet[2562]: I0905 00:20:37.331075 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:20:37.337905 kubelet[2562]: E0905 00:20:37.337878 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 00:20:37.338398 kubelet[2562]: E0905 00:20:37.338017 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:37.338398 kubelet[2562]: E0905 00:20:37.338042 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 00:20:37.338398 kubelet[2562]: E0905 00:20:37.338181 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:37.338398 kubelet[2562]: E0905 00:20:37.337881 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:20:37.338398 kubelet[2562]: E0905 00:20:37.338339 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:37.369808 kubelet[2562]: I0905 00:20:37.369329 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.369311096 podStartE2EDuration="2.369311096s" podCreationTimestamp="2025-09-05 00:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:20:37.369090183 +0000 UTC m=+1.150782914" watchObservedRunningTime="2025-09-05 00:20:37.369311096 +0000 UTC m=+1.151003837" Sep 5 00:20:37.399239 kubelet[2562]: I0905 00:20:37.399169 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.399146595 podStartE2EDuration="2.399146595s" podCreationTimestamp="2025-09-05 00:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:20:37.398570905 +0000 UTC m=+1.180263636" watchObservedRunningTime="2025-09-05 00:20:37.399146595 +0000 UTC m=+1.180839326" Sep 5 00:20:37.399843 kubelet[2562]: I0905 00:20:37.399673 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.399634777 podStartE2EDuration="2.399634777s" podCreationTimestamp="2025-09-05 00:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:20:37.388521663 +0000 UTC m=+1.170214404" watchObservedRunningTime="2025-09-05 00:20:37.399634777 +0000 UTC m=+1.181327508" Sep 5 00:20:38.331697 kubelet[2562]: E0905 00:20:38.331659 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:38.332365 kubelet[2562]: E0905 00:20:38.331763 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:38.332365 kubelet[2562]: E0905 00:20:38.331856 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:39.333291 kubelet[2562]: E0905 00:20:39.333247 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 5 00:20:40.165490 kubelet[2562]: I0905 00:20:40.165450 2562 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:20:40.165856 containerd[1469]: time="2025-09-05T00:20:40.165818402Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 00:20:40.166248 kubelet[2562]: I0905 00:20:40.165998 2562 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:20:41.133804 systemd[1]: Created slice kubepods-besteffort-podf350e591_268e_4876_87b7_5544fe25ef2f.slice - libcontainer container kubepods-besteffort-podf350e591_268e_4876_87b7_5544fe25ef2f.slice. Sep 5 00:20:41.222397 kubelet[2562]: E0905 00:20:41.222361 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:41.226257 kubelet[2562]: I0905 00:20:41.226219 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f350e591-268e-4876-87b7-5544fe25ef2f-xtables-lock\") pod \"kube-proxy-fd824\" (UID: \"f350e591-268e-4876-87b7-5544fe25ef2f\") " pod="kube-system/kube-proxy-fd824" Sep 5 00:20:41.226325 kubelet[2562]: I0905 00:20:41.226262 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f350e591-268e-4876-87b7-5544fe25ef2f-lib-modules\") pod \"kube-proxy-fd824\" (UID: \"f350e591-268e-4876-87b7-5544fe25ef2f\") " pod="kube-system/kube-proxy-fd824" Sep 5 00:20:41.226325 kubelet[2562]: I0905 00:20:41.226287 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr6lh\" (UniqueName: \"kubernetes.io/projected/f350e591-268e-4876-87b7-5544fe25ef2f-kube-api-access-nr6lh\") pod \"kube-proxy-fd824\" (UID: \"f350e591-268e-4876-87b7-5544fe25ef2f\") " pod="kube-system/kube-proxy-fd824" Sep 5 00:20:41.226397 kubelet[2562]: I0905 00:20:41.226330 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f350e591-268e-4876-87b7-5544fe25ef2f-kube-proxy\") pod \"kube-proxy-fd824\" (UID: \"f350e591-268e-4876-87b7-5544fe25ef2f\") " pod="kube-system/kube-proxy-fd824" Sep 5 00:20:41.342294 systemd[1]: Created slice kubepods-besteffort-pod094c51b9_fb19_48bb_a3db_489f3392ecdf.slice - libcontainer container kubepods-besteffort-pod094c51b9_fb19_48bb_a3db_489f3392ecdf.slice. 
Sep 5 00:20:41.344358 kubelet[2562]: E0905 00:20:41.344296 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:41.427495 kubelet[2562]: I0905 00:20:41.427324 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/094c51b9-fb19-48bb-a3db-489f3392ecdf-var-lib-calico\") pod \"tigera-operator-755d956888-nfttv\" (UID: \"094c51b9-fb19-48bb-a3db-489f3392ecdf\") " pod="tigera-operator/tigera-operator-755d956888-nfttv" Sep 5 00:20:41.427495 kubelet[2562]: I0905 00:20:41.427378 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhm9c\" (UniqueName: \"kubernetes.io/projected/094c51b9-fb19-48bb-a3db-489f3392ecdf-kube-api-access-zhm9c\") pod \"tigera-operator-755d956888-nfttv\" (UID: \"094c51b9-fb19-48bb-a3db-489f3392ecdf\") " pod="tigera-operator/tigera-operator-755d956888-nfttv" Sep 5 00:20:41.443710 kubelet[2562]: E0905 00:20:41.443649 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:41.445702 containerd[1469]: time="2025-09-05T00:20:41.445657641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fd824,Uid:f350e591-268e-4876-87b7-5544fe25ef2f,Namespace:kube-system,Attempt:0,}" Sep 5 00:20:41.471870 containerd[1469]: time="2025-09-05T00:20:41.471162618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:20:41.471870 containerd[1469]: time="2025-09-05T00:20:41.471825717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:20:41.471870 containerd[1469]: time="2025-09-05T00:20:41.471837322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:41.472217 containerd[1469]: time="2025-09-05T00:20:41.471917110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:41.498946 systemd[1]: Started cri-containerd-149f68979b32a3c24eeb2a938c362bf28757c21d4728a4249f32063559d92e87.scope - libcontainer container 149f68979b32a3c24eeb2a938c362bf28757c21d4728a4249f32063559d92e87. Sep 5 00:20:41.656408 containerd[1469]: time="2025-09-05T00:20:41.656355723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-nfttv,Uid:094c51b9-fb19-48bb-a3db-489f3392ecdf,Namespace:tigera-operator,Attempt:0,}" Sep 5 00:20:41.688883 containerd[1469]: time="2025-09-05T00:20:41.688580379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:20:41.688883 containerd[1469]: time="2025-09-05T00:20:41.688714811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:20:41.689208 containerd[1469]: time="2025-09-05T00:20:41.688752891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:41.689301 containerd[1469]: time="2025-09-05T00:20:41.689271338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:41.707890 systemd[1]: Started cri-containerd-e5c896d35970a3d1cd59cba92ec9b6d084c1c883bd40a49ee47da21fd9b7eb87.scope - libcontainer container e5c896d35970a3d1cd59cba92ec9b6d084c1c883bd40a49ee47da21fd9b7eb87. Sep 5 00:20:42.023091 containerd[1469]: time="2025-09-05T00:20:42.023035330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fd824,Uid:f350e591-268e-4876-87b7-5544fe25ef2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"149f68979b32a3c24eeb2a938c362bf28757c21d4728a4249f32063559d92e87\"" Sep 5 00:20:42.023963 kubelet[2562]: E0905 00:20:42.023926 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:42.043322 containerd[1469]: time="2025-09-05T00:20:42.043276994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-nfttv,Uid:094c51b9-fb19-48bb-a3db-489f3392ecdf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e5c896d35970a3d1cd59cba92ec9b6d084c1c883bd40a49ee47da21fd9b7eb87\"" Sep 5 00:20:42.045098 containerd[1469]: time="2025-09-05T00:20:42.045069367Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 00:20:42.067102 containerd[1469]: time="2025-09-05T00:20:42.066970456Z" level=info msg="CreateContainer within sandbox \"149f68979b32a3c24eeb2a938c362bf28757c21d4728a4249f32063559d92e87\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:20:42.194353 containerd[1469]: time="2025-09-05T00:20:42.194274703Z" level=info msg="CreateContainer within sandbox \"149f68979b32a3c24eeb2a938c362bf28757c21d4728a4249f32063559d92e87\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1547dad00010725a08428145d259764cdf0f6cc557a5d6f2bd8d12399f9ea43f\"" Sep 5 00:20:42.195351 containerd[1469]: time="2025-09-05T00:20:42.195295384Z" level=info msg="StartContainer for \"1547dad00010725a08428145d259764cdf0f6cc557a5d6f2bd8d12399f9ea43f\"" Sep 5 00:20:42.230116 systemd[1]: Started cri-containerd-1547dad00010725a08428145d259764cdf0f6cc557a5d6f2bd8d12399f9ea43f.scope - libcontainer container 1547dad00010725a08428145d259764cdf0f6cc557a5d6f2bd8d12399f9ea43f. 
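
Taken together, the containerd and systemd entries above trace the standard CRI start sequence: RunPodSandbox returns a sandbox id, the image is pulled, CreateContainer targets that sandbox, and StartContainer launches the shim that systemd then tracks as a cri-containerd-<id>.scope. A compressed sketch of that flow; the real interfaces live in k8s.io/cri-api and speak gRPC with richer request types, so the single-string signatures and the fakeRT stub here are simplifications so the sketch runs:

    package main

    import "fmt"

    // runtimeService captures just the CRI verbs exercised in the log above.
    type runtimeService interface {
        RunPodSandbox(pod string) (sandboxID string, err error)
        PullImage(ref string) error
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    // fakeRT is a stand-in implementation so the sketch runs without a CRI socket.
    type fakeRT struct{}

    func (fakeRT) RunPodSandbox(pod string) (string, error)        { return "sb-" + pod, nil }
    func (fakeRT) PullImage(ref string) error                      { return nil }
    func (fakeRT) CreateContainer(sb, name string) (string, error) { return "ctr-" + name, nil }
    func (fakeRT) StartContainer(id string) error                  { return nil }

    // startPod replays the order visible above: RunPodSandbox, PullImage,
    // CreateContainer within the returned sandbox, then StartContainer.
    func startPod(rt runtimeService, pod, image, name string) error {
        sb, err := rt.RunPodSandbox(pod)
        if err != nil {
            return fmt.Errorf("RunPodSandbox: %w", err)
        }
        if err := rt.PullImage(image); err != nil {
            return fmt.Errorf("PullImage: %w", err)
        }
        id, err := rt.CreateContainer(sb, name)
        if err != nil {
            return fmt.Errorf("CreateContainer: %w", err)
        }
        return rt.StartContainer(id)
    }

    func main() {
        err := startPod(fakeRT{}, "tigera-operator-755d956888-nfttv",
            "quay.io/tigera/operator:v1.38.6", "tigera-operator")
        fmt.Println("started, err =", err)
    }

The error wrapping marks where each failure would surface in log lines like the ones above.
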
Sep 5 00:20:42.284755 containerd[1469]: time="2025-09-05T00:20:42.284492166Z" level=info msg="StartContainer for \"1547dad00010725a08428145d259764cdf0f6cc557a5d6f2bd8d12399f9ea43f\" returns successfully" Sep 5 00:20:42.341530 kubelet[2562]: E0905 00:20:42.341475 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:42.407598 kubelet[2562]: I0905 00:20:42.407425 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fd824" podStartSLOduration=1.407397607 podStartE2EDuration="1.407397607s" podCreationTimestamp="2025-09-05 00:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:20:42.407014387 +0000 UTC m=+6.188707118" watchObservedRunningTime="2025-09-05 00:20:42.407397607 +0000 UTC m=+6.189090338" Sep 5 00:20:44.840244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171142704.mount: Deactivated successfully. Sep 5 00:20:45.789227 containerd[1469]: time="2025-09-05T00:20:45.789161684Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:45.789972 containerd[1469]: time="2025-09-05T00:20:45.789916664Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 5 00:20:45.791071 containerd[1469]: time="2025-09-05T00:20:45.791037300Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:45.793181 containerd[1469]: time="2025-09-05T00:20:45.793156109Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:45.793909 containerd[1469]: time="2025-09-05T00:20:45.793875155Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.748772939s" Sep 5 00:20:45.793909 containerd[1469]: time="2025-09-05T00:20:45.793903383Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 5 00:20:45.798436 containerd[1469]: time="2025-09-05T00:20:45.798401228Z" level=info msg="CreateContainer within sandbox \"e5c896d35970a3d1cd59cba92ec9b6d084c1c883bd40a49ee47da21fd9b7eb87\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 00:20:45.810995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017820015.mount: Deactivated successfully. 
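
The pull reported above can be cross-checked against containerd's own entry timestamps: the PullImage entry is stamped 2025-09-05T00:20:42.045069367Z and the Pulled entry 2025-09-05T00:20:45.793875155Z, about 33µs more than the internally measured 3.748772939s, which is plausibly the gap between measuring the duration and emitting the log line. Combined with the 25062609 bytes from the "stop pulling image" entry, that works out to roughly 6.4 MiB/s. A small sketch of the arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    // Cross-checks containerd's reported pull duration against its own log
    // timestamps for the tigera/operator:v1.38.6 pull seen above.
    func main() {
        start, _ := time.Parse(time.RFC3339Nano, "2025-09-05T00:20:42.045069367Z")
        done, _ := time.Parse(time.RFC3339Nano, "2025-09-05T00:20:45.793875155Z")
        elapsed := done.Sub(start)
        fmt.Println(elapsed) // 3.748805788s, vs the reported 3.748772939s

        const bytesRead = 25062609 // "bytes read" from the stop-pulling entry
        fmt.Printf("%.2f MiB/s\n", float64(bytesRead)/elapsed.Seconds()/(1<<20))
    }
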
Sep 5 00:20:45.812620 containerd[1469]: time="2025-09-05T00:20:45.812595593Z" level=info msg="CreateContainer within sandbox \"e5c896d35970a3d1cd59cba92ec9b6d084c1c883bd40a49ee47da21fd9b7eb87\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8f53b32b72786f1b3e43cea64ad78495a9c6f2285afb52fca70b4417c0035933\"" Sep 5 00:20:45.813190 containerd[1469]: time="2025-09-05T00:20:45.813153154Z" level=info msg="StartContainer for \"8f53b32b72786f1b3e43cea64ad78495a9c6f2285afb52fca70b4417c0035933\"" Sep 5 00:20:45.856162 systemd[1]: Started cri-containerd-8f53b32b72786f1b3e43cea64ad78495a9c6f2285afb52fca70b4417c0035933.scope - libcontainer container 8f53b32b72786f1b3e43cea64ad78495a9c6f2285afb52fca70b4417c0035933. Sep 5 00:20:45.886595 containerd[1469]: time="2025-09-05T00:20:45.886541027Z" level=info msg="StartContainer for \"8f53b32b72786f1b3e43cea64ad78495a9c6f2285afb52fca70b4417c0035933\" returns successfully" Sep 5 00:20:46.358808 kubelet[2562]: I0905 00:20:46.358651 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-nfttv" podStartSLOduration=1.608680216 podStartE2EDuration="5.358634915s" podCreationTimestamp="2025-09-05 00:20:41 +0000 UTC" firstStartedPulling="2025-09-05 00:20:42.044699084 +0000 UTC m=+5.826391815" lastFinishedPulling="2025-09-05 00:20:45.794653783 +0000 UTC m=+9.576346514" observedRunningTime="2025-09-05 00:20:46.358522143 +0000 UTC m=+10.140214874" watchObservedRunningTime="2025-09-05 00:20:46.358634915 +0000 UTC m=+10.140327646" Sep 5 00:20:46.993099 kubelet[2562]: E0905 00:20:46.993057 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:47.353897 kubelet[2562]: E0905 00:20:47.353676 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:51.227393 kubelet[2562]: E0905 00:20:51.227329 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:51.351827 kubelet[2562]: E0905 00:20:51.350314 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:51.364196 kubelet[2562]: E0905 00:20:51.364072 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:52.134887 sudo[1660]: pam_unix(sudo:session): session closed for user root Sep 5 00:20:52.137133 sshd[1657]: pam_unix(sshd:session): session closed for user core Sep 5 00:20:52.143338 systemd[1]: sshd@8-10.0.0.128:22-10.0.0.1:57602.service: Deactivated successfully. Sep 5 00:20:52.146148 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:20:52.146475 systemd[1]: session-9.scope: Consumed 5.539s CPU time, 160.7M memory peak, 0B memory swap peak. Sep 5 00:20:52.147010 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:20:52.148529 systemd-logind[1451]: Removed session 9. 
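
The pod_startup_latency_tracker numbers above are internally consistent: podStartE2EDuration is the running-time observation minus podCreationTimestamp, and podStartSLOduration is that E2E duration with the image-pull window (lastFinishedPulling − firstStartedPulling) subtracted. That is why kube-proxy earlier, with zero-value pull timestamps, has identical SLO and E2E durations while tigera-operator does not. Reproducing the arithmetic from the logged timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    // Reproduces the tigera-operator startup-latency figures from the log.
    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-09-05 00:20:41 +0000 UTC")
        running := parse("2025-09-05 00:20:46.358634915 +0000 UTC") // running observation
        pullStart := parse("2025-09-05 00:20:42.044699084 +0000 UTC")
        pullEnd := parse("2025-09-05 00:20:45.794653783 +0000 UTC")

        e2e := running.Sub(created)
        fmt.Println(e2e)                          // 5.358634915s == podStartE2EDuration
        fmt.Println(e2e - pullEnd.Sub(pullStart)) // 1.608680216s == podStartSLOduration
    }
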
Sep 5 00:20:53.908340 systemd[1]: Created slice kubepods-besteffort-pod50c95c3b_edf7_4bbc_b6c0_13fe40afdbda.slice - libcontainer container kubepods-besteffort-pod50c95c3b_edf7_4bbc_b6c0_13fe40afdbda.slice. Sep 5 00:20:53.913210 kubelet[2562]: I0905 00:20:53.913119 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp4qx\" (UniqueName: \"kubernetes.io/projected/50c95c3b-edf7-4bbc-b6c0-13fe40afdbda-kube-api-access-zp4qx\") pod \"calico-typha-5dcc9fd5fb-2x2h9\" (UID: \"50c95c3b-edf7-4bbc-b6c0-13fe40afdbda\") " pod="calico-system/calico-typha-5dcc9fd5fb-2x2h9" Sep 5 00:20:53.915323 kubelet[2562]: I0905 00:20:53.915272 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50c95c3b-edf7-4bbc-b6c0-13fe40afdbda-tigera-ca-bundle\") pod \"calico-typha-5dcc9fd5fb-2x2h9\" (UID: \"50c95c3b-edf7-4bbc-b6c0-13fe40afdbda\") " pod="calico-system/calico-typha-5dcc9fd5fb-2x2h9" Sep 5 00:20:53.915594 kubelet[2562]: I0905 00:20:53.915470 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/50c95c3b-edf7-4bbc-b6c0-13fe40afdbda-typha-certs\") pod \"calico-typha-5dcc9fd5fb-2x2h9\" (UID: \"50c95c3b-edf7-4bbc-b6c0-13fe40afdbda\") " pod="calico-system/calico-typha-5dcc9fd5fb-2x2h9" Sep 5 00:20:54.016029 kubelet[2562]: I0905 00:20:54.015967 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd-cni-bin-dir\") pod \"calico-node-ms7m7\" (UID: \"c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd\") " pod="calico-system/calico-node-ms7m7" Sep 5 00:20:54.016029 kubelet[2562]: I0905 00:20:54.016030 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd-cni-log-dir\") pod \"calico-node-ms7m7\" (UID: \"c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd\") " pod="calico-system/calico-node-ms7m7" Sep 5 00:20:54.016186 kubelet[2562]: I0905 00:20:54.016058 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd-flexvol-driver-host\") pod \"calico-node-ms7m7\" (UID: \"c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd\") " pod="calico-system/calico-node-ms7m7" Sep 5 00:20:54.016186 kubelet[2562]: I0905 00:20:54.016117 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6s5v\" (UniqueName: \"kubernetes.io/projected/c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd-kube-api-access-k6s5v\") pod \"calico-node-ms7m7\" (UID: \"c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd\") " pod="calico-system/calico-node-ms7m7" Sep 5 00:20:54.016186 kubelet[2562]: I0905 00:20:54.016165 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd-lib-modules\") pod \"calico-node-ms7m7\" (UID: \"c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd\") " pod="calico-system/calico-node-ms7m7" Sep 5 00:20:54.016271 kubelet[2562]: I0905 00:20:54.016191 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" 
(UniqueName: \"kubernetes.io/secret/c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd-node-certs\") pod \"calico-node-ms7m7\" (UID: \"c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd\") " pod="calico-system/calico-node-ms7m7" Sep 5 00:20:54.016271 kubelet[2562]: I0905 00:20:54.016225 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd-cni-net-dir\") pod \"calico-node-ms7m7\" (UID: \"c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd\") " pod="calico-system/calico-node-ms7m7" Sep 5 00:20:54.016271 kubelet[2562]: I0905 00:20:54.016248 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd-var-lib-calico\") pod \"calico-node-ms7m7\" (UID: \"c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd\") " pod="calico-system/calico-node-ms7m7" Sep 5 00:20:54.016271 kubelet[2562]: I0905 00:20:54.016268 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd-policysync\") pod \"calico-node-ms7m7\" (UID: \"c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd\") " pod="calico-system/calico-node-ms7m7" Sep 5 00:20:54.016369 kubelet[2562]: I0905 00:20:54.016291 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd-var-run-calico\") pod \"calico-node-ms7m7\" (UID: \"c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd\") " pod="calico-system/calico-node-ms7m7" Sep 5 00:20:54.016369 kubelet[2562]: I0905 00:20:54.016309 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd-xtables-lock\") pod \"calico-node-ms7m7\" (UID: \"c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd\") " pod="calico-system/calico-node-ms7m7" Sep 5 00:20:54.016369 kubelet[2562]: I0905 00:20:54.016333 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd-tigera-ca-bundle\") pod \"calico-node-ms7m7\" (UID: \"c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd\") " pod="calico-system/calico-node-ms7m7" Sep 5 00:20:54.037845 systemd[1]: Created slice kubepods-besteffort-podc4b0c4c0_1781_43a2_81d1_3fe3a7c32cfd.slice - libcontainer container kubepods-besteffort-podc4b0c4c0_1781_43a2_81d1_3fe3a7c32cfd.slice. Sep 5 00:20:54.123585 kubelet[2562]: E0905 00:20:54.122207 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.123585 kubelet[2562]: W0905 00:20:54.123546 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.125854 kubelet[2562]: E0905 00:20:54.124810 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:20:54.125854 kubelet[2562]: E0905 00:20:54.125438 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.125854 kubelet[2562]: W0905 00:20:54.125448 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.125854 kubelet[2562]: E0905 00:20:54.125469 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.136296 kubelet[2562]: E0905 00:20:54.136138 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ps2kf" podUID="9c21203f-d0d0-4ce5-8193-33854aaca356" Sep 5 00:20:54.138811 kubelet[2562]: E0905 00:20:54.138365 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.138811 kubelet[2562]: W0905 00:20:54.138386 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.138811 kubelet[2562]: E0905 00:20:54.138406 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.214704 kubelet[2562]: E0905 00:20:54.214566 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.214704 kubelet[2562]: W0905 00:20:54.214596 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.214704 kubelet[2562]: E0905 00:20:54.214623 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.215002 kubelet[2562]: E0905 00:20:54.214981 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.215039 kubelet[2562]: W0905 00:20:54.215002 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.215039 kubelet[2562]: E0905 00:20:54.215014 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:20:54.215283 kubelet[2562]: E0905 00:20:54.215267 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.215283 kubelet[2562]: W0905 00:20:54.215279 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.215396 kubelet[2562]: E0905 00:20:54.215290 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.215630 kubelet[2562]: E0905 00:20:54.215620 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.215630 kubelet[2562]: W0905 00:20:54.215629 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.215701 kubelet[2562]: E0905 00:20:54.215638 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.216862 kubelet[2562]: E0905 00:20:54.216833 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.216862 kubelet[2562]: W0905 00:20:54.216848 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.216862 kubelet[2562]: E0905 00:20:54.216858 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.217088 kubelet[2562]: E0905 00:20:54.217055 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.217088 kubelet[2562]: W0905 00:20:54.217064 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.217088 kubelet[2562]: E0905 00:20:54.217072 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.217254 kubelet[2562]: E0905 00:20:54.217234 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.217254 kubelet[2562]: W0905 00:20:54.217246 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.217254 kubelet[2562]: E0905 00:20:54.217254 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:20:54.217587 kubelet[2562]: E0905 00:20:54.217559 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.217587 kubelet[2562]: W0905 00:20:54.217584 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.217674 kubelet[2562]: E0905 00:20:54.217608 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.217835 kubelet[2562]: E0905 00:20:54.217770 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:54.218923 kubelet[2562]: E0905 00:20:54.218156 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.218923 kubelet[2562]: W0905 00:20:54.218169 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.218923 kubelet[2562]: E0905 00:20:54.218287 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.218923 kubelet[2562]: I0905 00:20:54.218312 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c21203f-d0d0-4ce5-8193-33854aaca356-kubelet-dir\") pod \"csi-node-driver-ps2kf\" (UID: \"9c21203f-d0d0-4ce5-8193-33854aaca356\") " pod="calico-system/csi-node-driver-ps2kf" Sep 5 00:20:54.218923 kubelet[2562]: E0905 00:20:54.218684 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.218923 kubelet[2562]: W0905 00:20:54.218694 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.218923 kubelet[2562]: E0905 00:20:54.218703 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:20:54.218923 kubelet[2562]: I0905 00:20:54.218718 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9c21203f-d0d0-4ce5-8193-33854aaca356-registration-dir\") pod \"csi-node-driver-ps2kf\" (UID: \"9c21203f-d0d0-4ce5-8193-33854aaca356\") " pod="calico-system/csi-node-driver-ps2kf" Sep 5 00:20:54.219226 containerd[1469]: time="2025-09-05T00:20:54.218263073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dcc9fd5fb-2x2h9,Uid:50c95c3b-edf7-4bbc-b6c0-13fe40afdbda,Namespace:calico-system,Attempt:0,}" Sep 5 00:20:54.219654 kubelet[2562]: E0905 00:20:54.219059 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.219654 kubelet[2562]: W0905 00:20:54.219073 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.219654 kubelet[2562]: E0905 00:20:54.219111 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.219654 kubelet[2562]: E0905 00:20:54.219331 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.219654 kubelet[2562]: W0905 00:20:54.219340 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.219654 kubelet[2562]: E0905 00:20:54.219348 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.219654 kubelet[2562]: E0905 00:20:54.219592 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.219654 kubelet[2562]: W0905 00:20:54.219601 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.219654 kubelet[2562]: E0905 00:20:54.219610 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.219946 kubelet[2562]: E0905 00:20:54.219860 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.219946 kubelet[2562]: W0905 00:20:54.219871 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.219946 kubelet[2562]: E0905 00:20:54.219883 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:20:54.220158 kubelet[2562]: E0905 00:20:54.220137 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.220158 kubelet[2562]: W0905 00:20:54.220148 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.220158 kubelet[2562]: E0905 00:20:54.220157 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.220625 kubelet[2562]: E0905 00:20:54.220604 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.220625 kubelet[2562]: W0905 00:20:54.220617 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.220625 kubelet[2562]: E0905 00:20:54.220626 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.221037 kubelet[2562]: E0905 00:20:54.221013 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.221037 kubelet[2562]: W0905 00:20:54.221028 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.221143 kubelet[2562]: E0905 00:20:54.221038 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.221303 kubelet[2562]: E0905 00:20:54.221283 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.221303 kubelet[2562]: W0905 00:20:54.221299 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.221480 kubelet[2562]: E0905 00:20:54.221349 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.222050 kubelet[2562]: E0905 00:20:54.222029 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.222050 kubelet[2562]: W0905 00:20:54.222043 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.222050 kubelet[2562]: E0905 00:20:54.222052 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:20:54.222334 kubelet[2562]: E0905 00:20:54.222303 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.222334 kubelet[2562]: W0905 00:20:54.222316 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.222334 kubelet[2562]: E0905 00:20:54.222326 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.222633 kubelet[2562]: E0905 00:20:54.222613 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.222633 kubelet[2562]: W0905 00:20:54.222626 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.222633 kubelet[2562]: E0905 00:20:54.222635 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.223116 kubelet[2562]: E0905 00:20:54.223091 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.223116 kubelet[2562]: W0905 00:20:54.223107 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.223116 kubelet[2562]: E0905 00:20:54.223117 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.223402 kubelet[2562]: E0905 00:20:54.223385 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.223402 kubelet[2562]: W0905 00:20:54.223396 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.223402 kubelet[2562]: E0905 00:20:54.223405 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.223693 kubelet[2562]: E0905 00:20:54.223673 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.223693 kubelet[2562]: W0905 00:20:54.223684 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.223693 kubelet[2562]: E0905 00:20:54.223693 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:20:54.223924 kubelet[2562]: E0905 00:20:54.223907 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.223924 kubelet[2562]: W0905 00:20:54.223917 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.223924 kubelet[2562]: E0905 00:20:54.223925 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.224126 kubelet[2562]: E0905 00:20:54.224109 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.224126 kubelet[2562]: W0905 00:20:54.224119 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.224126 kubelet[2562]: E0905 00:20:54.224127 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.246318 containerd[1469]: time="2025-09-05T00:20:54.246189477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:20:54.246318 containerd[1469]: time="2025-09-05T00:20:54.246285742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:20:54.246318 containerd[1469]: time="2025-09-05T00:20:54.246308167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:54.246737 containerd[1469]: time="2025-09-05T00:20:54.246426367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:54.268103 systemd[1]: Started cri-containerd-8592a31488ff38b8f1faa5ded3aafda264ccf459222c0b188a0833fb764c7b18.scope - libcontainer container 8592a31488ff38b8f1faa5ded3aafda264ccf459222c0b188a0833fb764c7b18. 
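
The wall of driver-call failures above has a single cause: while probing its plugin directory, the kubelet executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the FlexVolume "init" verb, the binary is not present, and unmarshalling the resulting empty stdout fails with "unexpected end of JSON input". A minimal stub showing the init handshake the kubelet expects; this is a sketch of the documented FlexVolume contract, not Calico's actual uds binary:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Minimal FlexVolume driver stub. The kubelet runs the plugin with "init"
    // and unmarshals stdout as JSON; a missing binary produces no output at
    // all, hence the repeated unmarshal errors in the log above.
    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            _ = json.NewEncoder(os.Stdout).Encode(map[string]interface{}{
                "status":       "Success",
                "capabilities": map[string]bool{"attach": false},
            })
            return
        }
        // Every other verb is unsupported in this stub.
        fmt.Println(`{"status":"Not supported"}`)
        os.Exit(1)
    }
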
Sep 5 00:20:54.309257 containerd[1469]: time="2025-09-05T00:20:54.309217702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dcc9fd5fb-2x2h9,Uid:50c95c3b-edf7-4bbc-b6c0-13fe40afdbda,Namespace:calico-system,Attempt:0,} returns sandbox id \"8592a31488ff38b8f1faa5ded3aafda264ccf459222c0b188a0833fb764c7b18\"" Sep 5 00:20:54.310002 kubelet[2562]: E0905 00:20:54.309977 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:20:54.310828 containerd[1469]: time="2025-09-05T00:20:54.310763046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 00:20:54.319843 kubelet[2562]: E0905 00:20:54.319812 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.319843 kubelet[2562]: W0905 00:20:54.319835 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.319843 kubelet[2562]: E0905 00:20:54.319854 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.320274 kubelet[2562]: E0905 00:20:54.320243 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.320274 kubelet[2562]: W0905 00:20:54.320257 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.320274 kubelet[2562]: E0905 00:20:54.320266 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.320390 kubelet[2562]: I0905 00:20:54.320289 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9c21203f-d0d0-4ce5-8193-33854aaca356-varrun\") pod \"csi-node-driver-ps2kf\" (UID: \"9c21203f-d0d0-4ce5-8193-33854aaca356\") " pod="calico-system/csi-node-driver-ps2kf" Sep 5 00:20:54.320598 kubelet[2562]: E0905 00:20:54.320578 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.320598 kubelet[2562]: W0905 00:20:54.320596 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.320665 kubelet[2562]: E0905 00:20:54.320610 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:20:54.320885 kubelet[2562]: E0905 00:20:54.320867 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.320925 kubelet[2562]: W0905 00:20:54.320883 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.320925 kubelet[2562]: E0905 00:20:54.320897 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.321170 kubelet[2562]: E0905 00:20:54.321143 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.321228 kubelet[2562]: W0905 00:20:54.321197 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.321228 kubelet[2562]: E0905 00:20:54.321219 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.321893 kubelet[2562]: E0905 00:20:54.321869 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.321893 kubelet[2562]: W0905 00:20:54.321886 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.321987 kubelet[2562]: E0905 00:20:54.321899 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.322214 kubelet[2562]: E0905 00:20:54.322195 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.322214 kubelet[2562]: W0905 00:20:54.322210 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.322294 kubelet[2562]: E0905 00:20:54.322222 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:20:54.322346 kubelet[2562]: I0905 00:20:54.322321 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4t4v\" (UniqueName: \"kubernetes.io/projected/9c21203f-d0d0-4ce5-8193-33854aaca356-kube-api-access-r4t4v\") pod \"csi-node-driver-ps2kf\" (UID: \"9c21203f-d0d0-4ce5-8193-33854aaca356\") " pod="calico-system/csi-node-driver-ps2kf" Sep 5 00:20:54.322630 kubelet[2562]: E0905 00:20:54.322597 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.322630 kubelet[2562]: W0905 00:20:54.322619 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.322697 kubelet[2562]: E0905 00:20:54.322636 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.322914 kubelet[2562]: E0905 00:20:54.322893 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.322914 kubelet[2562]: W0905 00:20:54.322903 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.322914 kubelet[2562]: E0905 00:20:54.322911 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.323146 kubelet[2562]: E0905 00:20:54.323133 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.323146 kubelet[2562]: W0905 00:20:54.323142 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.323202 kubelet[2562]: E0905 00:20:54.323150 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.323234 kubelet[2562]: I0905 00:20:54.323202 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9c21203f-d0d0-4ce5-8193-33854aaca356-socket-dir\") pod \"csi-node-driver-ps2kf\" (UID: \"9c21203f-d0d0-4ce5-8193-33854aaca356\") " pod="calico-system/csi-node-driver-ps2kf" Sep 5 00:20:54.323497 kubelet[2562]: E0905 00:20:54.323457 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.323542 kubelet[2562]: W0905 00:20:54.323510 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.323542 kubelet[2562]: E0905 00:20:54.323524 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:20:54.323761 kubelet[2562]: E0905 00:20:54.323740 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.323761 kubelet[2562]: W0905 00:20:54.323752 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.323761 kubelet[2562]: E0905 00:20:54.323762 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.324031 kubelet[2562]: E0905 00:20:54.324013 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.324031 kubelet[2562]: W0905 00:20:54.324026 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.324094 kubelet[2562]: E0905 00:20:54.324037 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.324312 kubelet[2562]: E0905 00:20:54.324292 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.324312 kubelet[2562]: W0905 00:20:54.324304 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.324312 kubelet[2562]: E0905 00:20:54.324314 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.324560 kubelet[2562]: E0905 00:20:54.324543 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.324560 kubelet[2562]: W0905 00:20:54.324553 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.324560 kubelet[2562]: E0905 00:20:54.324561 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.324820 kubelet[2562]: E0905 00:20:54.324795 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.324820 kubelet[2562]: W0905 00:20:54.324805 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.324820 kubelet[2562]: E0905 00:20:54.324813 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:20:54.325041 kubelet[2562]: E0905 00:20:54.325025 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.325041 kubelet[2562]: W0905 00:20:54.325033 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.325041 kubelet[2562]: E0905 00:20:54.325041 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.325251 kubelet[2562]: E0905 00:20:54.325239 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.325251 kubelet[2562]: W0905 00:20:54.325247 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.325294 kubelet[2562]: E0905 00:20:54.325254 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.325513 kubelet[2562]: E0905 00:20:54.325497 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.325513 kubelet[2562]: W0905 00:20:54.325506 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.325513 kubelet[2562]: E0905 00:20:54.325515 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:20:54.342257 containerd[1469]: time="2025-09-05T00:20:54.342212793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ms7m7,Uid:c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd,Namespace:calico-system,Attempt:0,}" Sep 5 00:20:54.375151 containerd[1469]: time="2025-09-05T00:20:54.375037529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:20:54.375151 containerd[1469]: time="2025-09-05T00:20:54.375101559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:20:54.375151 containerd[1469]: time="2025-09-05T00:20:54.375116199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:54.375358 containerd[1469]: time="2025-09-05T00:20:54.375219357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:20:54.402244 systemd[1]: Started cri-containerd-3d51b9bf9d386ef4bcd3a71a71254a15b371148fdb6ed6a0b7db06227ad99f9b.scope - libcontainer container 3d51b9bf9d386ef4bcd3a71a71254a15b371148fdb6ed6a0b7db06227ad99f9b. 
Sep 5 00:20:54.424858 kubelet[2562]: E0905 00:20:54.424813 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:54.424858 kubelet[2562]: W0905 00:20:54.424836 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:54.424858 kubelet[2562]: E0905 00:20:54.424857 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:20:54.430659 containerd[1469]: time="2025-09-05T00:20:54.430454698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ms7m7,Uid:c4b0c4c0-1781-43a2-81d1-3fe3a7c32cfd,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d51b9bf9d386ef4bcd3a71a71254a15b371148fdb6ed6a0b7db06227ad99f9b\"" Sep 5 00:20:56.321069 kubelet[2562]: E0905 00:20:56.320998 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ps2kf" podUID="9c21203f-d0d0-4ce5-8193-33854aaca356" Sep 5 00:20:56.771431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956105063.mount: Deactivated successfully.
Sep 5 00:20:58.240179 containerd[1469]: time="2025-09-05T00:20:58.240094384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:58.243910 containerd[1469]: time="2025-09-05T00:20:58.243837913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 5 00:20:58.245346 containerd[1469]: time="2025-09-05T00:20:58.245296663Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:58.247583 containerd[1469]: time="2025-09-05T00:20:58.247542945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:20:58.248451 containerd[1469]: time="2025-09-05T00:20:58.248394365Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.937584895s" Sep 5 00:20:58.248451 containerd[1469]: time="2025-09-05T00:20:58.248447582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 5 00:20:58.249761 containerd[1469]: time="2025-09-05T00:20:58.249581229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 00:20:58.320309 kubelet[2562]: E0905 00:20:58.320264 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ps2kf" podUID="9c21203f-d0d0-4ce5-8193-33854aaca356"
Sep 5 00:20:58.347140 containerd[1469]: time="2025-09-05T00:20:58.347094537Z" level=info msg="CreateContainer within sandbox \"8592a31488ff38b8f1faa5ded3aafda264ccf459222c0b188a0833fb764c7b18\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 00:20:58.455350 containerd[1469]: time="2025-09-05T00:20:58.455287896Z" level=info msg="CreateContainer within sandbox \"8592a31488ff38b8f1faa5ded3aafda264ccf459222c0b188a0833fb764c7b18\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2cf610f93bd8349eb493e2b8e949d3a2aa5be406c7f6a3ebc90330213ffe38e9\"" Sep 5 00:20:58.455856 containerd[1469]: time="2025-09-05T00:20:58.455823903Z" level=info msg="StartContainer for \"2cf610f93bd8349eb493e2b8e949d3a2aa5be406c7f6a3ebc90330213ffe38e9\"" Sep 5 00:20:58.492021 systemd[1]: Started cri-containerd-2cf610f93bd8349eb493e2b8e949d3a2aa5be406c7f6a3ebc90330213ffe38e9.scope - libcontainer container 2cf610f93bd8349eb493e2b8e949d3a2aa5be406c7f6a3ebc90330213ffe38e9. Sep 5 00:20:58.539064 containerd[1469]: time="2025-09-05T00:20:58.538995416Z" level=info msg="StartContainer for \"2cf610f93bd8349eb493e2b8e949d3a2aa5be406c7f6a3ebc90330213ffe38e9\" returns successfully" Sep 5 00:20:59.383912 kubelet[2562]: E0905 00:20:59.381817 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:20:59.458809 kubelet[2562]: E0905 00:20:59.457184 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:20:59.458809 kubelet[2562]: W0905 00:20:59.457220 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:20:59.458809 kubelet[2562]: E0905 00:20:59.457249 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:21:00.164961 containerd[1469]: time="2025-09-05T00:21:00.164899983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:00.165644 containerd[1469]: time="2025-09-05T00:21:00.165590788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 5 00:21:00.166720 containerd[1469]: time="2025-09-05T00:21:00.166677093Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:00.168935 containerd[1469]: time="2025-09-05T00:21:00.168901629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:00.169500 containerd[1469]: time="2025-09-05T00:21:00.169459267Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.919850373s" Sep 5 00:21:00.169545 containerd[1469]: time="2025-09-05T00:21:00.169497764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 5 00:21:00.174768 containerd[1469]: time="2025-09-05T00:21:00.174718632Z" level=info msg="CreateContainer within sandbox \"3d51b9bf9d386ef4bcd3a71a71254a15b371148fdb6ed6a0b7db06227ad99f9b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 00:21:00.191447 containerd[1469]: time="2025-09-05T00:21:00.191397753Z" level=info msg="CreateContainer within sandbox \"3d51b9bf9d386ef4bcd3a71a71254a15b371148fdb6ed6a0b7db06227ad99f9b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"08695cbc0e1da749bc2d65ddb97cbae8251bce7be2a4715df73c4838f1fd3b74\"" Sep 5 00:21:00.193915 containerd[1469]: time="2025-09-05T00:21:00.193532199Z" level=info msg="StartContainer for \"08695cbc0e1da749bc2d65ddb97cbae8251bce7be2a4715df73c4838f1fd3b74\"" Sep 5 00:21:00.237040 systemd[1]: Started cri-containerd-08695cbc0e1da749bc2d65ddb97cbae8251bce7be2a4715df73c4838f1fd3b74.scope - libcontainer container 08695cbc0e1da749bc2d65ddb97cbae8251bce7be2a4715df73c4838f1fd3b74.
Sep 5 00:21:00.275951 containerd[1469]: time="2025-09-05T00:21:00.275895467Z" level=info msg="StartContainer for \"08695cbc0e1da749bc2d65ddb97cbae8251bce7be2a4715df73c4838f1fd3b74\" returns successfully" Sep 5 00:21:00.289245 systemd[1]: cri-containerd-08695cbc0e1da749bc2d65ddb97cbae8251bce7be2a4715df73c4838f1fd3b74.scope: Deactivated successfully. Sep 5 00:21:00.320839 kubelet[2562]: E0905 00:21:00.320764 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ps2kf" podUID="9c21203f-d0d0-4ce5-8193-33854aaca356" Sep 5 00:21:00.384569 kubelet[2562]: I0905 00:21:00.384525 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:21:00.586412 kubelet[2562]: E0905 00:21:00.385026 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:00.586412 kubelet[2562]: I0905 00:21:00.580207 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5dcc9fd5fb-2x2h9" podStartSLOduration=3.641399281 podStartE2EDuration="7.58018829s" podCreationTimestamp="2025-09-05 00:20:53 +0000 UTC" firstStartedPulling="2025-09-05 00:20:54.310464933 +0000 UTC m=+18.092157664" lastFinishedPulling="2025-09-05 00:20:58.249253942 +0000 UTC m=+22.030946673" observedRunningTime="2025-09-05 00:20:59.395339238 +0000 UTC m=+23.177031969" watchObservedRunningTime="2025-09-05 00:21:00.58018829 +0000 UTC m=+24.361881021" Sep 5 00:21:00.681203 containerd[1469]: time="2025-09-05T00:21:00.678196478Z" level=info msg="shim disconnected" id=08695cbc0e1da749bc2d65ddb97cbae8251bce7be2a4715df73c4838f1fd3b74 namespace=k8s.io Sep 5 00:21:00.681203 containerd[1469]: time="2025-09-05T00:21:00.681200666Z" level=warning msg="cleaning up after shim disconnected" id=08695cbc0e1da749bc2d65ddb97cbae8251bce7be2a4715df73c4838f1fd3b74 namespace=k8s.io Sep 5 00:21:00.681203 containerd[1469]: time="2025-09-05T00:21:00.681215596Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:21:01.187721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08695cbc0e1da749bc2d65ddb97cbae8251bce7be2a4715df73c4838f1fd3b74-rootfs.mount: Deactivated successfully. 
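The pod_startup_latency_tracker record above ties its numbers together in a simple way: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since the startup SLI excludes pull time. The monotonic m=+ offsets in the record make the relationship checkable:

    package main

    import "fmt"

    func main() {
    	// Monotonic offsets (seconds), copied from the m=+ values above.
    	firstStartedPulling := 18.092157664
    	lastFinishedPulling := 22.030946673
    	e2e := 7.58018829 // podStartE2EDuration

    	pull := lastFinishedPulling - firstStartedPulling
    	fmt.Printf("image-pull window:   %.9fs\n", pull)     // 3.938789009s
    	fmt.Printf("podStartSLOduration: %.9fs\n", e2e-pull) // 3.641399281s, as logged
    }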
Sep 5 00:21:01.389097 containerd[1469]: time="2025-09-05T00:21:01.389053237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 00:21:02.321289 kubelet[2562]: E0905 00:21:02.321183 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ps2kf" podUID="9c21203f-d0d0-4ce5-8193-33854aaca356" Sep 5 00:21:04.320875 kubelet[2562]: E0905 00:21:04.320820 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ps2kf" podUID="9c21203f-d0d0-4ce5-8193-33854aaca356" Sep 5 00:21:05.476693 containerd[1469]: time="2025-09-05T00:21:05.476612294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:05.477519 containerd[1469]: time="2025-09-05T00:21:05.477457168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 5 00:21:05.478648 containerd[1469]: time="2025-09-05T00:21:05.478605415Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:05.480960 containerd[1469]: time="2025-09-05T00:21:05.480910218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:05.481705 containerd[1469]: time="2025-09-05T00:21:05.481655172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.092558687s" Sep 5 00:21:05.481705 containerd[1469]: time="2025-09-05T00:21:05.481688118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 5 00:21:05.499443 containerd[1469]: time="2025-09-05T00:21:05.499377203Z" level=info msg="CreateContainer within sandbox \"3d51b9bf9d386ef4bcd3a71a71254a15b371148fdb6ed6a0b7db06227ad99f9b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 00:21:05.517551 containerd[1469]: time="2025-09-05T00:21:05.517488850Z" level=info msg="CreateContainer within sandbox \"3d51b9bf9d386ef4bcd3a71a71254a15b371148fdb6ed6a0b7db06227ad99f9b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b1ca3214b4871d198cbcb46e661c732b4dfa32e5ffbe231a70e4b60fea58b80a\"" Sep 5 00:21:05.518143 containerd[1469]: time="2025-09-05T00:21:05.518083995Z" level=info msg="StartContainer for \"b1ca3214b4871d198cbcb46e661c732b4dfa32e5ffbe231a70e4b60fea58b80a\"" Sep 5 00:21:05.556044 systemd[1]: Started cri-containerd-b1ca3214b4871d198cbcb46e661c732b4dfa32e5ffbe231a70e4b60fea58b80a.scope - libcontainer container b1ca3214b4871d198cbcb46e661c732b4dfa32e5ffbe231a70e4b60fea58b80a. 
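The pull of ghcr.io/flatcar/calico/cni:v3.30.3 recorded above took containerd just over four seconds ("in 4.092558687s"). The kubelet requests pulls like this over the CRI image service; below is a minimal sketch of issuing the same RPC directly, assuming the default containerd socket path and the k8s.io/cri-api client bindings:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Connect to containerd's CRI endpoint (default socket path assumed).
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    	defer cancel()

    	// The same RPC the kubelet issues; containerd logs it as "PullImage ...".
    	img := runtimeapi.NewImageServiceClient(conn)
    	start := time.Now()
    	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/cni:v3.30.3"},
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("pulled %s in %s\n", resp.ImageRef, time.Since(start))
    }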
Sep 5 00:21:05.590242 containerd[1469]: time="2025-09-05T00:21:05.590135034Z" level=info msg="StartContainer for \"b1ca3214b4871d198cbcb46e661c732b4dfa32e5ffbe231a70e4b60fea58b80a\" returns successfully" Sep 5 00:21:06.322079 kubelet[2562]: E0905 00:21:06.321798 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ps2kf" podUID="9c21203f-d0d0-4ce5-8193-33854aaca356" Sep 5 00:21:07.024626 systemd[1]: cri-containerd-b1ca3214b4871d198cbcb46e661c732b4dfa32e5ffbe231a70e4b60fea58b80a.scope: Deactivated successfully. Sep 5 00:21:07.039184 kubelet[2562]: I0905 00:21:07.039145 2562 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 00:21:07.053217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1ca3214b4871d198cbcb46e661c732b4dfa32e5ffbe231a70e4b60fea58b80a-rootfs.mount: Deactivated successfully. Sep 5 00:21:07.676935 containerd[1469]: time="2025-09-05T00:21:07.676804464Z" level=info msg="shim disconnected" id=b1ca3214b4871d198cbcb46e661c732b4dfa32e5ffbe231a70e4b60fea58b80a namespace=k8s.io Sep 5 00:21:07.676935 containerd[1469]: time="2025-09-05T00:21:07.676899623Z" level=warning msg="cleaning up after shim disconnected" id=b1ca3214b4871d198cbcb46e661c732b4dfa32e5ffbe231a70e4b60fea58b80a namespace=k8s.io Sep 5 00:21:07.676935 containerd[1469]: time="2025-09-05T00:21:07.676911757Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:21:07.688680 systemd[1]: Created slice kubepods-burstable-pod4cebde53_a50f_40b3_8246_ec9a25623456.slice - libcontainer container kubepods-burstable-pod4cebde53_a50f_40b3_8246_ec9a25623456.slice. Sep 5 00:21:07.699882 systemd[1]: Created slice kubepods-besteffort-pod9c21203f_d0d0_4ce5_8193_33854aaca356.slice - libcontainer container kubepods-besteffort-pod9c21203f_d0d0_4ce5_8193_33854aaca356.slice. Sep 5 00:21:07.704504 containerd[1469]: time="2025-09-05T00:21:07.704449130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ps2kf,Uid:9c21203f-d0d0-4ce5-8193-33854aaca356,Namespace:calico-system,Attempt:0,}" Sep 5 00:21:07.711020 systemd[1]: Created slice kubepods-burstable-pode8d3e7fa_d9cb_474c_a80c_88c1ff6063e3.slice - libcontainer container kubepods-burstable-pode8d3e7fa_d9cb_474c_a80c_88c1ff6063e3.slice. Sep 5 00:21:07.747375 systemd[1]: Created slice kubepods-besteffort-poda245f3c0_169f_40da_8a24_943433622035.slice - libcontainer container kubepods-besteffort-poda245f3c0_169f_40da_8a24_943433622035.slice. Sep 5 00:21:07.755404 systemd[1]: Created slice kubepods-besteffort-pod6875358a_dc29_4093_b0ab_9707efb59b86.slice - libcontainer container kubepods-besteffort-pod6875358a_dc29_4093_b0ab_9707efb59b86.slice. Sep 5 00:21:07.761345 systemd[1]: Created slice kubepods-besteffort-pod4a619295_d161_4fe3_866f_7232abcadd6e.slice - libcontainer container kubepods-besteffort-pod4a619295_d161_4fe3_866f_7232abcadd6e.slice. Sep 5 00:21:07.766329 systemd[1]: Created slice kubepods-besteffort-pode434d84b_5a4e_4e9f_be35_307612a171bb.slice - libcontainer container kubepods-besteffort-pode434d84b_5a4e_4e9f_be35_307612a171bb.slice. Sep 5 00:21:07.771929 systemd[1]: Created slice kubepods-besteffort-podecf2b887_548a_428c_a0ae_a36f75934ba9.slice - libcontainer container kubepods-besteffort-podecf2b887_548a_428c_a0ae_a36f75934ba9.slice. 
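The slice names above follow the kubelet's cgroup layout, kubepods-<qos>-pod<uid>.slice: the coredns pods land in the burstable tier, the rest in besteffort. The records below then show why sandbox creation still fails: the Calico CNI plugin will not configure pod networking until calico/node has written this node's name to /var/lib/calico/nodename, and that file does not exist yet. A sketch of that readiness gate, assuming nothing beyond what the error text itself states:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // nodename returns the node name written by the calico/node container,
    // mirroring the precondition the CNI plugin checks before setting up
    // pod networking.
    func nodename() (string, error) {
    	b, err := os.ReadFile("/var/lib/calico/nodename")
    	if err != nil {
    		// A missing file surfaces as "stat /var/lib/calico/nodename:
    		// no such file or directory" in the records below.
    		return "", fmt.Errorf("check that the calico/node container is running and has mounted /var/lib/calico/: %w", err)
    	}
    	return strings.TrimSpace(string(b)), nil
    }

    func main() {
    	name, err := nodename()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("node:", name)
    }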
Sep 5 00:21:07.829852 containerd[1469]: time="2025-09-05T00:21:07.829765713Z" level=error msg="Failed to destroy network for sandbox \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:07.830299 containerd[1469]: time="2025-09-05T00:21:07.830258874Z" level=error msg="encountered an error cleaning up failed sandbox \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:07.830358 containerd[1469]: time="2025-09-05T00:21:07.830324885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ps2kf,Uid:9c21203f-d0d0-4ce5-8193-33854aaca356,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:07.830629 kubelet[2562]: E0905 00:21:07.830579 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:07.831415 kubelet[2562]: E0905 00:21:07.830657 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ps2kf" Sep 5 00:21:07.831415 kubelet[2562]: E0905 00:21:07.830687 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ps2kf" Sep 5 00:21:07.831415 kubelet[2562]: E0905 00:21:07.830755 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ps2kf_calico-system(9c21203f-d0d0-4ce5-8193-33854aaca356)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ps2kf_calico-system(9c21203f-d0d0-4ce5-8193-33854aaca356)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ps2kf" 
podUID="9c21203f-d0d0-4ce5-8193-33854aaca356" Sep 5 00:21:07.832163 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da-shm.mount: Deactivated successfully. Sep 5 00:21:07.838957 kubelet[2562]: I0905 00:21:07.838919 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cebde53-a50f-40b3-8246-ec9a25623456-config-volume\") pod \"coredns-674b8bbfcf-lntfn\" (UID: \"4cebde53-a50f-40b3-8246-ec9a25623456\") " pod="kube-system/coredns-674b8bbfcf-lntfn" Sep 5 00:21:07.838957 kubelet[2562]: I0905 00:21:07.838962 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a245f3c0-169f-40da-8a24-943433622035-whisker-backend-key-pair\") pod \"whisker-78bc7854b7-rl9jb\" (UID: \"a245f3c0-169f-40da-8a24-943433622035\") " pod="calico-system/whisker-78bc7854b7-rl9jb" Sep 5 00:21:07.839116 kubelet[2562]: I0905 00:21:07.839000 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mph9b\" (UniqueName: \"kubernetes.io/projected/a245f3c0-169f-40da-8a24-943433622035-kube-api-access-mph9b\") pod \"whisker-78bc7854b7-rl9jb\" (UID: \"a245f3c0-169f-40da-8a24-943433622035\") " pod="calico-system/whisker-78bc7854b7-rl9jb" Sep 5 00:21:07.839116 kubelet[2562]: I0905 00:21:07.839022 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecf2b887-548a-428c-a0ae-a36f75934ba9-config\") pod \"goldmane-54d579b49d-k7qc4\" (UID: \"ecf2b887-548a-428c-a0ae-a36f75934ba9\") " pod="calico-system/goldmane-54d579b49d-k7qc4" Sep 5 00:21:07.839116 kubelet[2562]: I0905 00:21:07.839041 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecf2b887-548a-428c-a0ae-a36f75934ba9-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-k7qc4\" (UID: \"ecf2b887-548a-428c-a0ae-a36f75934ba9\") " pod="calico-system/goldmane-54d579b49d-k7qc4" Sep 5 00:21:07.839116 kubelet[2562]: I0905 00:21:07.839088 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t4vc\" (UniqueName: \"kubernetes.io/projected/ecf2b887-548a-428c-a0ae-a36f75934ba9-kube-api-access-7t4vc\") pod \"goldmane-54d579b49d-k7qc4\" (UID: \"ecf2b887-548a-428c-a0ae-a36f75934ba9\") " pod="calico-system/goldmane-54d579b49d-k7qc4" Sep 5 00:21:07.839260 kubelet[2562]: I0905 00:21:07.839142 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4a619295-d161-4fe3-866f-7232abcadd6e-calico-apiserver-certs\") pod \"calico-apiserver-575c74c6d4-llg82\" (UID: \"4a619295-d161-4fe3-866f-7232abcadd6e\") " pod="calico-apiserver/calico-apiserver-575c74c6d4-llg82" Sep 5 00:21:07.839260 kubelet[2562]: I0905 00:21:07.839164 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6875358a-dc29-4093-b0ab-9707efb59b86-calico-apiserver-certs\") pod \"calico-apiserver-575c74c6d4-v8wf5\" (UID: \"6875358a-dc29-4093-b0ab-9707efb59b86\") " pod="calico-apiserver/calico-apiserver-575c74c6d4-v8wf5" Sep 5 
00:21:07.839260 kubelet[2562]: I0905 00:21:07.839185 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e434d84b-5a4e-4e9f-be35-307612a171bb-tigera-ca-bundle\") pod \"calico-kube-controllers-7db846b456-d8kdg\" (UID: \"e434d84b-5a4e-4e9f-be35-307612a171bb\") " pod="calico-system/calico-kube-controllers-7db846b456-d8kdg" Sep 5 00:21:07.839260 kubelet[2562]: I0905 00:21:07.839209 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zf7z\" (UniqueName: \"kubernetes.io/projected/6875358a-dc29-4093-b0ab-9707efb59b86-kube-api-access-9zf7z\") pod \"calico-apiserver-575c74c6d4-v8wf5\" (UID: \"6875358a-dc29-4093-b0ab-9707efb59b86\") " pod="calico-apiserver/calico-apiserver-575c74c6d4-v8wf5" Sep 5 00:21:07.839260 kubelet[2562]: I0905 00:21:07.839232 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljrlm\" (UniqueName: \"kubernetes.io/projected/4a619295-d161-4fe3-866f-7232abcadd6e-kube-api-access-ljrlm\") pod \"calico-apiserver-575c74c6d4-llg82\" (UID: \"4a619295-d161-4fe3-866f-7232abcadd6e\") " pod="calico-apiserver/calico-apiserver-575c74c6d4-llg82" Sep 5 00:21:07.839427 kubelet[2562]: I0905 00:21:07.839252 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wwcn\" (UniqueName: \"kubernetes.io/projected/e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3-kube-api-access-5wwcn\") pod \"coredns-674b8bbfcf-f6n82\" (UID: \"e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3\") " pod="kube-system/coredns-674b8bbfcf-f6n82" Sep 5 00:21:07.839427 kubelet[2562]: I0905 00:21:07.839276 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmwd9\" (UniqueName: \"kubernetes.io/projected/4cebde53-a50f-40b3-8246-ec9a25623456-kube-api-access-nmwd9\") pod \"coredns-674b8bbfcf-lntfn\" (UID: \"4cebde53-a50f-40b3-8246-ec9a25623456\") " pod="kube-system/coredns-674b8bbfcf-lntfn" Sep 5 00:21:07.839427 kubelet[2562]: I0905 00:21:07.839302 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ecf2b887-548a-428c-a0ae-a36f75934ba9-goldmane-key-pair\") pod \"goldmane-54d579b49d-k7qc4\" (UID: \"ecf2b887-548a-428c-a0ae-a36f75934ba9\") " pod="calico-system/goldmane-54d579b49d-k7qc4" Sep 5 00:21:07.839427 kubelet[2562]: I0905 00:21:07.839323 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a245f3c0-169f-40da-8a24-943433622035-whisker-ca-bundle\") pod \"whisker-78bc7854b7-rl9jb\" (UID: \"a245f3c0-169f-40da-8a24-943433622035\") " pod="calico-system/whisker-78bc7854b7-rl9jb" Sep 5 00:21:07.839427 kubelet[2562]: I0905 00:21:07.839357 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3-config-volume\") pod \"coredns-674b8bbfcf-f6n82\" (UID: \"e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3\") " pod="kube-system/coredns-674b8bbfcf-f6n82" Sep 5 00:21:07.839579 kubelet[2562]: I0905 00:21:07.839397 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5gb7\" (UniqueName: 
\"kubernetes.io/projected/e434d84b-5a4e-4e9f-be35-307612a171bb-kube-api-access-f5gb7\") pod \"calico-kube-controllers-7db846b456-d8kdg\" (UID: \"e434d84b-5a4e-4e9f-be35-307612a171bb\") " pod="calico-system/calico-kube-controllers-7db846b456-d8kdg" Sep 5 00:21:07.857426 kubelet[2562]: I0905 00:21:07.857374 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:07.857627 containerd[1469]: time="2025-09-05T00:21:07.857598093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 00:21:07.857955 containerd[1469]: time="2025-09-05T00:21:07.857927248Z" level=info msg="StopPodSandbox for \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\"" Sep 5 00:21:07.858107 containerd[1469]: time="2025-09-05T00:21:07.858087867Z" level=info msg="Ensure that sandbox 53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da in task-service has been cleanup successfully" Sep 5 00:21:07.887233 containerd[1469]: time="2025-09-05T00:21:07.887175946Z" level=error msg="StopPodSandbox for \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\" failed" error="failed to destroy network for sandbox \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:07.887436 kubelet[2562]: E0905 00:21:07.887391 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:07.887506 kubelet[2562]: E0905 00:21:07.887459 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da"} Sep 5 00:21:07.887546 kubelet[2562]: E0905 00:21:07.887525 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c21203f-d0d0-4ce5-8193-33854aaca356\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:21:07.887610 kubelet[2562]: E0905 00:21:07.887557 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c21203f-d0d0-4ce5-8193-33854aaca356\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ps2kf" podUID="9c21203f-d0d0-4ce5-8193-33854aaca356" Sep 5 00:21:07.996673 kubelet[2562]: E0905 00:21:07.996193 2562 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:07.998263 containerd[1469]: time="2025-09-05T00:21:07.998222163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lntfn,Uid:4cebde53-a50f-40b3-8246-ec9a25623456,Namespace:kube-system,Attempt:0,}" Sep 5 00:21:08.018908 kubelet[2562]: E0905 00:21:08.018769 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:08.019682 containerd[1469]: time="2025-09-05T00:21:08.019540998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f6n82,Uid:e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3,Namespace:kube-system,Attempt:0,}" Sep 5 00:21:08.053191 containerd[1469]: time="2025-09-05T00:21:08.053143714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78bc7854b7-rl9jb,Uid:a245f3c0-169f-40da-8a24-943433622035,Namespace:calico-system,Attempt:0,}" Sep 5 00:21:08.059646 containerd[1469]: time="2025-09-05T00:21:08.059599696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575c74c6d4-v8wf5,Uid:6875358a-dc29-4093-b0ab-9707efb59b86,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:21:08.066326 containerd[1469]: time="2025-09-05T00:21:08.066271586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575c74c6d4-llg82,Uid:4a619295-d161-4fe3-866f-7232abcadd6e,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:21:08.070089 containerd[1469]: time="2025-09-05T00:21:08.069896701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db846b456-d8kdg,Uid:e434d84b-5a4e-4e9f-be35-307612a171bb,Namespace:calico-system,Attempt:0,}" Sep 5 00:21:08.078158 containerd[1469]: time="2025-09-05T00:21:08.077941921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-k7qc4,Uid:ecf2b887-548a-428c-a0ae-a36f75934ba9,Namespace:calico-system,Attempt:0,}" Sep 5 00:21:08.091916 containerd[1469]: time="2025-09-05T00:21:08.091827489Z" level=error msg="Failed to destroy network for sandbox \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.093651 containerd[1469]: time="2025-09-05T00:21:08.093533079Z" level=error msg="encountered an error cleaning up failed sandbox \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.093904 containerd[1469]: time="2025-09-05T00:21:08.093621605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lntfn,Uid:4cebde53-a50f-40b3-8246-ec9a25623456,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.094881 kubelet[2562]: E0905 00:21:08.094319 2562 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.094881 kubelet[2562]: E0905 00:21:08.094394 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lntfn" Sep 5 00:21:08.094881 kubelet[2562]: E0905 00:21:08.094424 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lntfn" Sep 5 00:21:08.095042 kubelet[2562]: E0905 00:21:08.094486 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lntfn_kube-system(4cebde53-a50f-40b3-8246-ec9a25623456)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lntfn_kube-system(4cebde53-a50f-40b3-8246-ec9a25623456)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lntfn" podUID="4cebde53-a50f-40b3-8246-ec9a25623456" Sep 5 00:21:08.097325 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811-shm.mount: Deactivated successfully. 
Sep 5 00:21:08.150279 containerd[1469]: time="2025-09-05T00:21:08.150220185Z" level=error msg="Failed to destroy network for sandbox \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.150722 containerd[1469]: time="2025-09-05T00:21:08.150692253Z" level=error msg="encountered an error cleaning up failed sandbox \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.150793 containerd[1469]: time="2025-09-05T00:21:08.150755549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f6n82,Uid:e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.151054 kubelet[2562]: E0905 00:21:08.151012 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.151137 kubelet[2562]: E0905 00:21:08.151083 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f6n82" Sep 5 00:21:08.151137 kubelet[2562]: E0905 00:21:08.151118 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f6n82" Sep 5 00:21:08.151214 kubelet[2562]: E0905 00:21:08.151178 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-f6n82_kube-system(e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-f6n82_kube-system(e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f6n82" 
podUID="e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3" Sep 5 00:21:08.165809 containerd[1469]: time="2025-09-05T00:21:08.164212776Z" level=error msg="Failed to destroy network for sandbox \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.165809 containerd[1469]: time="2025-09-05T00:21:08.165611014Z" level=error msg="encountered an error cleaning up failed sandbox \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.166011 containerd[1469]: time="2025-09-05T00:21:08.165977703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78bc7854b7-rl9jb,Uid:a245f3c0-169f-40da-8a24-943433622035,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.166540 kubelet[2562]: E0905 00:21:08.166481 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.166611 kubelet[2562]: E0905 00:21:08.166567 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78bc7854b7-rl9jb" Sep 5 00:21:08.166611 kubelet[2562]: E0905 00:21:08.166597 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78bc7854b7-rl9jb" Sep 5 00:21:08.166709 kubelet[2562]: E0905 00:21:08.166669 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78bc7854b7-rl9jb_calico-system(a245f3c0-169f-40da-8a24-943433622035)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78bc7854b7-rl9jb_calico-system(a245f3c0-169f-40da-8a24-943433622035)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-78bc7854b7-rl9jb" podUID="a245f3c0-169f-40da-8a24-943433622035" Sep 5 00:21:08.218239 containerd[1469]: time="2025-09-05T00:21:08.218145438Z" level=error msg="Failed to destroy network for sandbox \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.219374 containerd[1469]: time="2025-09-05T00:21:08.219072911Z" level=error msg="encountered an error cleaning up failed sandbox \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.219374 containerd[1469]: time="2025-09-05T00:21:08.219222588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575c74c6d4-llg82,Uid:4a619295-d161-4fe3-866f-7232abcadd6e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.219663 kubelet[2562]: E0905 00:21:08.219506 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.219663 kubelet[2562]: E0905 00:21:08.219578 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575c74c6d4-llg82" Sep 5 00:21:08.219663 kubelet[2562]: E0905 00:21:08.219610 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575c74c6d4-llg82" Sep 5 00:21:08.219845 kubelet[2562]: E0905 00:21:08.219676 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-575c74c6d4-llg82_calico-apiserver(4a619295-d161-4fe3-866f-7232abcadd6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-575c74c6d4-llg82_calico-apiserver(4a619295-d161-4fe3-866f-7232abcadd6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575c74c6d4-llg82" podUID="4a619295-d161-4fe3-866f-7232abcadd6e" Sep 5 00:21:08.227650 containerd[1469]: time="2025-09-05T00:21:08.227497865Z" level=error msg="Failed to destroy network for sandbox \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.228136 containerd[1469]: time="2025-09-05T00:21:08.228110032Z" level=error msg="encountered an error cleaning up failed sandbox \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.228256 containerd[1469]: time="2025-09-05T00:21:08.228233939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db846b456-d8kdg,Uid:e434d84b-5a4e-4e9f-be35-307612a171bb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.229019 kubelet[2562]: E0905 00:21:08.228548 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.229019 kubelet[2562]: E0905 00:21:08.228610 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7db846b456-d8kdg" Sep 5 00:21:08.229019 kubelet[2562]: E0905 00:21:08.228631 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7db846b456-d8kdg" Sep 5 00:21:08.229185 kubelet[2562]: E0905 00:21:08.228677 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7db846b456-d8kdg_calico-system(e434d84b-5a4e-4e9f-be35-307612a171bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7db846b456-d8kdg_calico-system(e434d84b-5a4e-4e9f-be35-307612a171bb)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7db846b456-d8kdg" podUID="e434d84b-5a4e-4e9f-be35-307612a171bb" Sep 5 00:21:08.230203 containerd[1469]: time="2025-09-05T00:21:08.230145698Z" level=error msg="Failed to destroy network for sandbox \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.230763 containerd[1469]: time="2025-09-05T00:21:08.230645392Z" level=error msg="encountered an error cleaning up failed sandbox \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.230763 containerd[1469]: time="2025-09-05T00:21:08.230690201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-k7qc4,Uid:ecf2b887-548a-428c-a0ae-a36f75934ba9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.231204 kubelet[2562]: E0905 00:21:08.231079 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.231204 kubelet[2562]: E0905 00:21:08.231150 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-k7qc4" Sep 5 00:21:08.231204 kubelet[2562]: E0905 00:21:08.231171 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-k7qc4" Sep 5 00:21:08.233477 kubelet[2562]: E0905 00:21:08.231221 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-k7qc4_calico-system(ecf2b887-548a-428c-a0ae-a36f75934ba9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-54d579b49d-k7qc4_calico-system(ecf2b887-548a-428c-a0ae-a36f75934ba9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-k7qc4" podUID="ecf2b887-548a-428c-a0ae-a36f75934ba9" Sep 5 00:21:08.237503 containerd[1469]: time="2025-09-05T00:21:08.237447030Z" level=error msg="Failed to destroy network for sandbox \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.237966 containerd[1469]: time="2025-09-05T00:21:08.237934689Z" level=error msg="encountered an error cleaning up failed sandbox \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.238031 containerd[1469]: time="2025-09-05T00:21:08.238001622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575c74c6d4-v8wf5,Uid:6875358a-dc29-4093-b0ab-9707efb59b86,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.238236 kubelet[2562]: E0905 00:21:08.238200 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.238287 kubelet[2562]: E0905 00:21:08.238258 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575c74c6d4-v8wf5" Sep 5 00:21:08.238312 kubelet[2562]: E0905 00:21:08.238284 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575c74c6d4-v8wf5" Sep 5 00:21:08.238396 kubelet[2562]: E0905 00:21:08.238362 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-575c74c6d4-v8wf5_calico-apiserver(6875358a-dc29-4093-b0ab-9707efb59b86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-575c74c6d4-v8wf5_calico-apiserver(6875358a-dc29-4093-b0ab-9707efb59b86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575c74c6d4-v8wf5" podUID="6875358a-dc29-4093-b0ab-9707efb59b86" Sep 5 00:21:08.860704 kubelet[2562]: I0905 00:21:08.860653 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:08.861573 containerd[1469]: time="2025-09-05T00:21:08.861529724Z" level=info msg="StopPodSandbox for \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\"" Sep 5 00:21:08.861881 containerd[1469]: time="2025-09-05T00:21:08.861710703Z" level=info msg="Ensure that sandbox 79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5 in task-service has been cleanup successfully" Sep 5 00:21:08.861909 kubelet[2562]: I0905 00:21:08.861692 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Sep 5 00:21:08.862222 containerd[1469]: time="2025-09-05T00:21:08.862169395Z" level=info msg="StopPodSandbox for \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\"" Sep 5 00:21:08.862384 containerd[1469]: time="2025-09-05T00:21:08.862359894Z" level=info msg="Ensure that sandbox 3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024 in task-service has been cleanup successfully" Sep 5 00:21:08.863790 kubelet[2562]: I0905 00:21:08.863500 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Sep 5 00:21:08.865590 containerd[1469]: time="2025-09-05T00:21:08.865154939Z" level=info msg="StopPodSandbox for \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\"" Sep 5 00:21:08.865590 containerd[1469]: time="2025-09-05T00:21:08.865327111Z" level=info msg="Ensure that sandbox 6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811 in task-service has been cleanup successfully" Sep 5 00:21:08.866555 kubelet[2562]: I0905 00:21:08.866516 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:08.867314 containerd[1469]: time="2025-09-05T00:21:08.867293099Z" level=info msg="StopPodSandbox for \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\"" Sep 5 00:21:08.867732 containerd[1469]: time="2025-09-05T00:21:08.867694216Z" level=info msg="Ensure that sandbox 6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d in task-service has been cleanup successfully" Sep 5 00:21:08.869446 kubelet[2562]: I0905 00:21:08.868743 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:08.869898 containerd[1469]: time="2025-09-05T00:21:08.869856864Z" level=info msg="StopPodSandbox for 
\"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\"" Sep 5 00:21:08.871027 containerd[1469]: time="2025-09-05T00:21:08.870980748Z" level=info msg="Ensure that sandbox 86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534 in task-service has been cleanup successfully" Sep 5 00:21:08.872079 kubelet[2562]: I0905 00:21:08.872057 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:08.873967 containerd[1469]: time="2025-09-05T00:21:08.872581319Z" level=info msg="StopPodSandbox for \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\"" Sep 5 00:21:08.874100 containerd[1469]: time="2025-09-05T00:21:08.874076600Z" level=info msg="Ensure that sandbox 13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d in task-service has been cleanup successfully" Sep 5 00:21:08.906228 kubelet[2562]: I0905 00:21:08.905839 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Sep 5 00:21:08.908080 containerd[1469]: time="2025-09-05T00:21:08.906835480Z" level=info msg="StopPodSandbox for \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\"" Sep 5 00:21:08.908080 containerd[1469]: time="2025-09-05T00:21:08.907011610Z" level=info msg="Ensure that sandbox 3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034 in task-service has been cleanup successfully" Sep 5 00:21:08.934596 containerd[1469]: time="2025-09-05T00:21:08.933934970Z" level=error msg="StopPodSandbox for \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\" failed" error="failed to destroy network for sandbox \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.935171 kubelet[2562]: E0905 00:21:08.935132 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:08.935385 kubelet[2562]: E0905 00:21:08.935253 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5"} Sep 5 00:21:08.935385 kubelet[2562]: E0905 00:21:08.935292 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e434d84b-5a4e-4e9f-be35-307612a171bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:21:08.935385 kubelet[2562]: E0905 00:21:08.935352 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e434d84b-5a4e-4e9f-be35-307612a171bb\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7db846b456-d8kdg" podUID="e434d84b-5a4e-4e9f-be35-307612a171bb" Sep 5 00:21:08.941078 containerd[1469]: time="2025-09-05T00:21:08.941032066Z" level=error msg="StopPodSandbox for \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\" failed" error="failed to destroy network for sandbox \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.941330 containerd[1469]: time="2025-09-05T00:21:08.941090322Z" level=error msg="StopPodSandbox for \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\" failed" error="failed to destroy network for sandbox \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.941553 kubelet[2562]: E0905 00:21:08.941524 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Sep 5 00:21:08.941638 kubelet[2562]: E0905 00:21:08.941620 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024"} Sep 5 00:21:08.941706 kubelet[2562]: E0905 00:21:08.941693 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a245f3c0-169f-40da-8a24-943433622035\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:21:08.941833 kubelet[2562]: E0905 00:21:08.941565 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Sep 5 00:21:08.941882 kubelet[2562]: E0905 00:21:08.941867 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811"} Sep 5 00:21:08.941931 kubelet[2562]: E0905 
00:21:08.941904 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4cebde53-a50f-40b3-8246-ec9a25623456\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:21:08.941980 kubelet[2562]: E0905 00:21:08.941948 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4cebde53-a50f-40b3-8246-ec9a25623456\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lntfn" podUID="4cebde53-a50f-40b3-8246-ec9a25623456" Sep 5 00:21:08.941980 kubelet[2562]: E0905 00:21:08.941803 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a245f3c0-169f-40da-8a24-943433622035\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78bc7854b7-rl9jb" podUID="a245f3c0-169f-40da-8a24-943433622035" Sep 5 00:21:08.948622 containerd[1469]: time="2025-09-05T00:21:08.947202740Z" level=error msg="StopPodSandbox for \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\" failed" error="failed to destroy network for sandbox \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.948728 containerd[1469]: time="2025-09-05T00:21:08.948685117Z" level=error msg="StopPodSandbox for \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\" failed" error="failed to destroy network for sandbox \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.948840 kubelet[2562]: E0905 00:21:08.948808 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:08.948934 kubelet[2562]: E0905 00:21:08.948900 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d"} Sep 5 00:21:08.948986 
kubelet[2562]: E0905 00:21:08.948940 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:21:08.948986 kubelet[2562]: E0905 00:21:08.948964 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f6n82" podUID="e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3" Sep 5 00:21:08.948986 kubelet[2562]: E0905 00:21:08.948823 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:08.949121 kubelet[2562]: E0905 00:21:08.948988 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d"} Sep 5 00:21:08.949121 kubelet[2562]: E0905 00:21:08.949004 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ecf2b887-548a-428c-a0ae-a36f75934ba9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:21:08.949121 kubelet[2562]: E0905 00:21:08.949020 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ecf2b887-548a-428c-a0ae-a36f75934ba9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-k7qc4" podUID="ecf2b887-548a-428c-a0ae-a36f75934ba9" Sep 5 00:21:08.965808 containerd[1469]: time="2025-09-05T00:21:08.964938420Z" level=error msg="StopPodSandbox for \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\" failed" error="failed to destroy network for sandbox \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.965963 kubelet[2562]: E0905 00:21:08.965221 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Sep 5 00:21:08.965963 kubelet[2562]: E0905 00:21:08.965281 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034"} Sep 5 00:21:08.965963 kubelet[2562]: E0905 00:21:08.965313 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6875358a-dc29-4093-b0ab-9707efb59b86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:21:08.965963 kubelet[2562]: E0905 00:21:08.965338 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6875358a-dc29-4093-b0ab-9707efb59b86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575c74c6d4-v8wf5" podUID="6875358a-dc29-4093-b0ab-9707efb59b86" Sep 5 00:21:08.975056 containerd[1469]: time="2025-09-05T00:21:08.975001440Z" level=error msg="StopPodSandbox for \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\" failed" error="failed to destroy network for sandbox \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:21:08.975291 kubelet[2562]: E0905 00:21:08.975251 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:08.975334 kubelet[2562]: E0905 00:21:08.975306 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534"} Sep 5 00:21:08.975382 kubelet[2562]: E0905 00:21:08.975343 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a619295-d161-4fe3-866f-7232abcadd6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:21:08.975382 kubelet[2562]: E0905 00:21:08.975368 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a619295-d161-4fe3-866f-7232abcadd6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575c74c6d4-llg82" podUID="4a619295-d161-4fe3-866f-7232abcadd6e" Sep 5 00:21:09.053950 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5-shm.mount: Deactivated successfully. Sep 5 00:21:09.054101 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034-shm.mount: Deactivated successfully. Sep 5 00:21:09.054212 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534-shm.mount: Deactivated successfully. Sep 5 00:21:09.054330 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024-shm.mount: Deactivated successfully. Sep 5 00:21:09.054450 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d-shm.mount: Deactivated successfully. Sep 5 00:21:13.935584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2430674055.mount: Deactivated successfully. 
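Every KillPodSandbox failure above bottoms out in the same check: the Calico CNI plugin stats /var/lib/calico/nodename before tearing a sandbox down, and that file only exists once the calico/node container is running and has mounted /var/lib/calico/. A minimal Go sketch of that gate, using the path and error wording from the log (the function name is illustrative, not Calico's actual API):

package main

import (
	"fmt"
	"os"
)

// nodenameFile is written by calico/node at startup; until it appears,
// every CNI add/delete on this node fails with the error relayed above.
const nodenameFile = "/var/lib/calico/nodename"

func nodenameReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		return fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return nil
}

func main() {
	if err := nodenameReady(); err != nil {
		fmt.Println(err) // mirrors the error string kubelet keeps logging
	}
}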
Sep 5 00:21:16.604112 containerd[1469]: time="2025-09-05T00:21:16.604030038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:16.605638 containerd[1469]: time="2025-09-05T00:21:16.605583742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 5 00:21:16.607836 containerd[1469]: time="2025-09-05T00:21:16.607299946Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:16.610271 containerd[1469]: time="2025-09-05T00:21:16.610229983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:16.611305 containerd[1469]: time="2025-09-05T00:21:16.611256413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.753617529s" Sep 5 00:21:16.611377 containerd[1469]: time="2025-09-05T00:21:16.611310620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 5 00:21:16.627793 containerd[1469]: time="2025-09-05T00:21:16.627724029Z" level=info msg="CreateContainer within sandbox \"3d51b9bf9d386ef4bcd3a71a71254a15b371148fdb6ed6a0b7db06227ad99f9b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:21:17.105692 containerd[1469]: time="2025-09-05T00:21:17.105522313Z" level=info msg="CreateContainer within sandbox \"3d51b9bf9d386ef4bcd3a71a71254a15b371148fdb6ed6a0b7db06227ad99f9b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a72eb705927b0a6fbf999f5667097e43f7d9372b1771b7a693dc915f29f95634\"" Sep 5 00:21:17.107968 containerd[1469]: time="2025-09-05T00:21:17.106701795Z" level=info msg="StartContainer for \"a72eb705927b0a6fbf999f5667097e43f7d9372b1771b7a693dc915f29f95634\"" Sep 5 00:21:17.141080 systemd[1]: Started sshd@9-10.0.0.128:22-10.0.0.1:56024.service - OpenSSH per-connection server daemon (10.0.0.1:56024). Sep 5 00:21:17.170933 systemd[1]: Started cri-containerd-a72eb705927b0a6fbf999f5667097e43f7d9372b1771b7a693dc915f29f95634.scope - libcontainer container a72eb705927b0a6fbf999f5667097e43f7d9372b1771b7a693dc915f29f95634. Sep 5 00:21:17.191729 sshd[3810]: Accepted publickey for core from 10.0.0.1 port 56024 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:21:17.193487 sshd[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:21:17.198882 systemd-logind[1451]: New session 10 of user core. Sep 5 00:21:17.205107 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 00:21:17.209450 containerd[1469]: time="2025-09-05T00:21:17.209415853Z" level=info msg="StartContainer for \"a72eb705927b0a6fbf999f5667097e43f7d9372b1771b7a693dc915f29f95634\" returns successfully" Sep 5 00:21:17.302811 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
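For scale: containerd reports the calico/node image at 157078201 bytes, pulled in 8.753617529s, i.e. roughly 17 MiB/s from the registry. The arithmetic, with both constants copied from the log lines above:

package main

import "fmt"

func main() {
	const bytes = 157078201.0   // image size logged by containerd
	const seconds = 8.753617529 // pull duration logged by containerd
	fmt.Printf("%.1f MiB/s\n", bytes/seconds/(1024*1024)) // ~17.1 MiB/s
}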
Sep 5 00:21:17.302963 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 5 00:21:17.374008 sshd[3810]: pam_unix(sshd:session): session closed for user core Sep 5 00:21:17.379604 systemd[1]: sshd@9-10.0.0.128:22-10.0.0.1:56024.service: Deactivated successfully. Sep 5 00:21:17.379972 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:21:17.385336 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:21:17.390295 systemd-logind[1451]: Removed session 10. Sep 5 00:21:17.391495 containerd[1469]: time="2025-09-05T00:21:17.390694756Z" level=info msg="StopPodSandbox for \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\"" Sep 5 00:21:17.571405 containerd[1469]: 2025-09-05 00:21:17.473 [INFO][3881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Sep 5 00:21:17.571405 containerd[1469]: 2025-09-05 00:21:17.473 [INFO][3881] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" iface="eth0" netns="/var/run/netns/cni-eb02bc1d-7513-f1c8-4199-28c871290bf8" Sep 5 00:21:17.571405 containerd[1469]: 2025-09-05 00:21:17.473 [INFO][3881] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" iface="eth0" netns="/var/run/netns/cni-eb02bc1d-7513-f1c8-4199-28c871290bf8" Sep 5 00:21:17.571405 containerd[1469]: 2025-09-05 00:21:17.475 [INFO][3881] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" iface="eth0" netns="/var/run/netns/cni-eb02bc1d-7513-f1c8-4199-28c871290bf8" Sep 5 00:21:17.571405 containerd[1469]: 2025-09-05 00:21:17.475 [INFO][3881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Sep 5 00:21:17.571405 containerd[1469]: 2025-09-05 00:21:17.475 [INFO][3881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Sep 5 00:21:17.571405 containerd[1469]: 2025-09-05 00:21:17.555 [INFO][3890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" HandleID="k8s-pod-network.3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Workload="localhost-k8s-whisker--78bc7854b7--rl9jb-eth0" Sep 5 00:21:17.571405 containerd[1469]: 2025-09-05 00:21:17.556 [INFO][3890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:17.571405 containerd[1469]: 2025-09-05 00:21:17.556 [INFO][3890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:17.571405 containerd[1469]: 2025-09-05 00:21:17.563 [WARNING][3890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" HandleID="k8s-pod-network.3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Workload="localhost-k8s-whisker--78bc7854b7--rl9jb-eth0" Sep 5 00:21:17.571405 containerd[1469]: 2025-09-05 00:21:17.563 [INFO][3890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" HandleID="k8s-pod-network.3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Workload="localhost-k8s-whisker--78bc7854b7--rl9jb-eth0" Sep 5 00:21:17.571405 containerd[1469]: 2025-09-05 00:21:17.564 [INFO][3890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:17.571405 containerd[1469]: 2025-09-05 00:21:17.568 [INFO][3881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Sep 5 00:21:17.571994 containerd[1469]: time="2025-09-05T00:21:17.571620792Z" level=info msg="TearDown network for sandbox \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\" successfully" Sep 5 00:21:17.571994 containerd[1469]: time="2025-09-05T00:21:17.571659398Z" level=info msg="StopPodSandbox for \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\" returns successfully" Sep 5 00:21:17.619858 systemd[1]: run-netns-cni\x2deb02bc1d\x2d7513\x2df1c8\x2d4199\x2d28c871290bf8.mount: Deactivated successfully. Sep 5 00:21:17.698497 kubelet[2562]: I0905 00:21:17.698261 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a245f3c0-169f-40da-8a24-943433622035-whisker-backend-key-pair\") pod \"a245f3c0-169f-40da-8a24-943433622035\" (UID: \"a245f3c0-169f-40da-8a24-943433622035\") " Sep 5 00:21:17.698497 kubelet[2562]: I0905 00:21:17.698339 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a245f3c0-169f-40da-8a24-943433622035-whisker-ca-bundle\") pod \"a245f3c0-169f-40da-8a24-943433622035\" (UID: \"a245f3c0-169f-40da-8a24-943433622035\") " Sep 5 00:21:17.698497 kubelet[2562]: I0905 00:21:17.698378 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mph9b\" (UniqueName: \"kubernetes.io/projected/a245f3c0-169f-40da-8a24-943433622035-kube-api-access-mph9b\") pod \"a245f3c0-169f-40da-8a24-943433622035\" (UID: \"a245f3c0-169f-40da-8a24-943433622035\") " Sep 5 00:21:17.699151 kubelet[2562]: I0905 00:21:17.698901 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a245f3c0-169f-40da-8a24-943433622035-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a245f3c0-169f-40da-8a24-943433622035" (UID: "a245f3c0-169f-40da-8a24-943433622035"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:21:17.703982 kubelet[2562]: I0905 00:21:17.703930 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a245f3c0-169f-40da-8a24-943433622035-kube-api-access-mph9b" (OuterVolumeSpecName: "kube-api-access-mph9b") pod "a245f3c0-169f-40da-8a24-943433622035" (UID: "a245f3c0-169f-40da-8a24-943433622035"). InnerVolumeSpecName "kube-api-access-mph9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:21:17.704039 kubelet[2562]: I0905 00:21:17.703947 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a245f3c0-169f-40da-8a24-943433622035-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a245f3c0-169f-40da-8a24-943433622035" (UID: "a245f3c0-169f-40da-8a24-943433622035"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 00:21:17.704991 systemd[1]: var-lib-kubelet-pods-a245f3c0\x2d169f\x2d40da\x2d8a24\x2d943433622035-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmph9b.mount: Deactivated successfully. Sep 5 00:21:17.705114 systemd[1]: var-lib-kubelet-pods-a245f3c0\x2d169f\x2d40da\x2d8a24\x2d943433622035-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 00:21:17.799141 kubelet[2562]: I0905 00:21:17.799093 2562 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mph9b\" (UniqueName: \"kubernetes.io/projected/a245f3c0-169f-40da-8a24-943433622035-kube-api-access-mph9b\") on node \"localhost\" DevicePath \"\"" Sep 5 00:21:17.799141 kubelet[2562]: I0905 00:21:17.799127 2562 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a245f3c0-169f-40da-8a24-943433622035-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 00:21:17.799141 kubelet[2562]: I0905 00:21:17.799141 2562 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a245f3c0-169f-40da-8a24-943433622035-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 00:21:17.931293 systemd[1]: Removed slice kubepods-besteffort-poda245f3c0_169f_40da_8a24_943433622035.slice - libcontainer container kubepods-besteffort-poda245f3c0_169f_40da_8a24_943433622035.slice. Sep 5 00:21:17.945303 kubelet[2562]: I0905 00:21:17.945222 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ms7m7" podStartSLOduration=2.7645689190000002 podStartE2EDuration="24.945207316s" podCreationTimestamp="2025-09-05 00:20:53 +0000 UTC" firstStartedPulling="2025-09-05 00:20:54.431950432 +0000 UTC m=+18.213643163" lastFinishedPulling="2025-09-05 00:21:16.612588829 +0000 UTC m=+40.394281560" observedRunningTime="2025-09-05 00:21:17.944767216 +0000 UTC m=+41.726459957" watchObservedRunningTime="2025-09-05 00:21:17.945207316 +0000 UTC m=+41.726900047" Sep 5 00:21:18.278051 systemd[1]: Created slice kubepods-besteffort-podd8574fc1_dadc_45d2_822c_57c88fe5d56c.slice - libcontainer container kubepods-besteffort-podd8574fc1_dadc_45d2_822c_57c88fe5d56c.slice. 
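The mount units being deactivated encode their paths with systemd's escaping rules: a bare "-" separates path components, while literal characters are hex-escaped ("\x2d" for "-", "\x7e" for "~"). A small decoder sketch for mapping a unit name back to the volume path it pins (systemd-escape --unescape --path does the same from the shell):

package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

var hexEsc = regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)

// unescapeMountUnit reverses systemd's path escaping for .mount unit names:
// bare dashes become "/", then \xNN sequences become their literal bytes.
func unescapeMountUnit(unit string) string {
	p := strings.TrimSuffix(unit, ".mount")
	p = strings.ReplaceAll(p, "-", "/") // escaped dashes are still \x2d here
	p = hexEsc.ReplaceAllStringFunc(p, func(m string) string {
		n, _ := strconv.ParseUint(m[2:], 16, 8)
		return string(rune(n))
	})
	return "/" + p
}

func main() {
	fmt.Println(unescapeMountUnit(
		`var-lib-kubelet-pods-a245f3c0\x2d169f\x2d40da\x2d8a24\x2d943433622035-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount`))
	// -> /var/lib/kubelet/pods/a245f3c0-169f-40da-8a24-943433622035/volumes/kubernetes.io~secret/whisker-backend-key-pair
}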
Sep 5 00:21:18.301082 kubelet[2562]: I0905 00:21:18.300997 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwwvz\" (UniqueName: \"kubernetes.io/projected/d8574fc1-dadc-45d2-822c-57c88fe5d56c-kube-api-access-bwwvz\") pod \"whisker-755bcd44d6-97wb8\" (UID: \"d8574fc1-dadc-45d2-822c-57c88fe5d56c\") " pod="calico-system/whisker-755bcd44d6-97wb8" Sep 5 00:21:18.301189 kubelet[2562]: I0905 00:21:18.301098 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8574fc1-dadc-45d2-822c-57c88fe5d56c-whisker-backend-key-pair\") pod \"whisker-755bcd44d6-97wb8\" (UID: \"d8574fc1-dadc-45d2-822c-57c88fe5d56c\") " pod="calico-system/whisker-755bcd44d6-97wb8" Sep 5 00:21:18.301189 kubelet[2562]: I0905 00:21:18.301118 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8574fc1-dadc-45d2-822c-57c88fe5d56c-whisker-ca-bundle\") pod \"whisker-755bcd44d6-97wb8\" (UID: \"d8574fc1-dadc-45d2-822c-57c88fe5d56c\") " pod="calico-system/whisker-755bcd44d6-97wb8" Sep 5 00:21:18.322720 kubelet[2562]: I0905 00:21:18.322645 2562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a245f3c0-169f-40da-8a24-943433622035" path="/var/lib/kubelet/pods/a245f3c0-169f-40da-8a24-943433622035/volumes" Sep 5 00:21:18.582630 containerd[1469]: time="2025-09-05T00:21:18.582497125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-755bcd44d6-97wb8,Uid:d8574fc1-dadc-45d2-822c-57c88fe5d56c,Namespace:calico-system,Attempt:0,}" Sep 5 00:21:18.783963 systemd-networkd[1387]: cali7e4db9f7464: Link UP Sep 5 00:21:18.787976 systemd-networkd[1387]: cali7e4db9f7464: Gained carrier Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.650 [INFO][3932] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.663 [INFO][3932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--755bcd44d6--97wb8-eth0 whisker-755bcd44d6- calico-system d8574fc1-dadc-45d2-822c-57c88fe5d56c 950 0 2025-09-05 00:21:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:755bcd44d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-755bcd44d6-97wb8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7e4db9f7464 [] [] }} ContainerID="4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" Namespace="calico-system" Pod="whisker-755bcd44d6-97wb8" WorkloadEndpoint="localhost-k8s-whisker--755bcd44d6--97wb8-" Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.663 [INFO][3932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" Namespace="calico-system" Pod="whisker-755bcd44d6-97wb8" WorkloadEndpoint="localhost-k8s-whisker--755bcd44d6--97wb8-eth0" Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.719 [INFO][4005] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" HandleID="k8s-pod-network.4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" 
Workload="localhost-k8s-whisker--755bcd44d6--97wb8-eth0" Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.719 [INFO][4005] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" HandleID="k8s-pod-network.4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" Workload="localhost-k8s-whisker--755bcd44d6--97wb8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-755bcd44d6-97wb8", "timestamp":"2025-09-05 00:21:18.71898381 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.719 [INFO][4005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.719 [INFO][4005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.719 [INFO][4005] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.726 [INFO][4005] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" host="localhost" Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.736 [INFO][4005] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.743 [INFO][4005] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.748 [INFO][4005] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.753 [INFO][4005] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.753 [INFO][4005] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" host="localhost" Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.756 [INFO][4005] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600 Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.761 [INFO][4005] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" host="localhost" Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.766 [INFO][4005] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" host="localhost" Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.766 [INFO][4005] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" host="localhost" Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.767 [INFO][4005] ipam/ipam_plugin.go 374: Released 
host-wide IPAM lock. Sep 5 00:21:18.820806 containerd[1469]: 2025-09-05 00:21:18.767 [INFO][4005] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" HandleID="k8s-pod-network.4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" Workload="localhost-k8s-whisker--755bcd44d6--97wb8-eth0" Sep 5 00:21:18.821726 containerd[1469]: 2025-09-05 00:21:18.771 [INFO][3932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" Namespace="calico-system" Pod="whisker-755bcd44d6-97wb8" WorkloadEndpoint="localhost-k8s-whisker--755bcd44d6--97wb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--755bcd44d6--97wb8-eth0", GenerateName:"whisker-755bcd44d6-", Namespace:"calico-system", SelfLink:"", UID:"d8574fc1-dadc-45d2-822c-57c88fe5d56c", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 21, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"755bcd44d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-755bcd44d6-97wb8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7e4db9f7464", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:18.821726 containerd[1469]: 2025-09-05 00:21:18.771 [INFO][3932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" Namespace="calico-system" Pod="whisker-755bcd44d6-97wb8" WorkloadEndpoint="localhost-k8s-whisker--755bcd44d6--97wb8-eth0" Sep 5 00:21:18.821726 containerd[1469]: 2025-09-05 00:21:18.771 [INFO][3932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e4db9f7464 ContainerID="4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" Namespace="calico-system" Pod="whisker-755bcd44d6-97wb8" WorkloadEndpoint="localhost-k8s-whisker--755bcd44d6--97wb8-eth0" Sep 5 00:21:18.821726 containerd[1469]: 2025-09-05 00:21:18.788 [INFO][3932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" Namespace="calico-system" Pod="whisker-755bcd44d6-97wb8" WorkloadEndpoint="localhost-k8s-whisker--755bcd44d6--97wb8-eth0" Sep 5 00:21:18.821726 containerd[1469]: 2025-09-05 00:21:18.788 [INFO][3932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" Namespace="calico-system" Pod="whisker-755bcd44d6-97wb8" WorkloadEndpoint="localhost-k8s-whisker--755bcd44d6--97wb8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--755bcd44d6--97wb8-eth0", GenerateName:"whisker-755bcd44d6-", Namespace:"calico-system", SelfLink:"", UID:"d8574fc1-dadc-45d2-822c-57c88fe5d56c", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 21, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"755bcd44d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600", Pod:"whisker-755bcd44d6-97wb8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7e4db9f7464", MAC:"62:30:68:66:96:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:18.821726 containerd[1469]: 2025-09-05 00:21:18.807 [INFO][3932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600" Namespace="calico-system" Pod="whisker-755bcd44d6-97wb8" WorkloadEndpoint="localhost-k8s-whisker--755bcd44d6--97wb8-eth0" Sep 5 00:21:18.860823 containerd[1469]: time="2025-09-05T00:21:18.857370037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:18.860823 containerd[1469]: time="2025-09-05T00:21:18.857428481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:18.860823 containerd[1469]: time="2025-09-05T00:21:18.857438441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:18.860823 containerd[1469]: time="2025-09-05T00:21:18.857509971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:18.884200 systemd[1]: Started cri-containerd-4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600.scope - libcontainer container 4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600. 
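The IPAM trace above claims 192.168.88.129/26 out of the block 192.168.88.128/26 that this host holds an affinity for. A quick check with net/netip confirms the claimed address sits inside that block, which spans .128 through .191 (a sketch, not Calico's IPAM code):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // host-affine IPAM block
	addr := netip.MustParseAddr("192.168.88.129")       // address claimed above
	fmt.Println(block.Contains(addr))                   // true
}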
Sep 5 00:21:18.896841 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:21:18.925466 containerd[1469]: time="2025-09-05T00:21:18.925422505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-755bcd44d6-97wb8,Uid:d8574fc1-dadc-45d2-822c-57c88fe5d56c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600\"" Sep 5 00:21:18.927581 containerd[1469]: time="2025-09-05T00:21:18.927407725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 00:21:19.320930 containerd[1469]: time="2025-09-05T00:21:19.320599253Z" level=info msg="StopPodSandbox for \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\"" Sep 5 00:21:19.400960 containerd[1469]: 2025-09-05 00:21:19.364 [INFO][4116] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Sep 5 00:21:19.400960 containerd[1469]: 2025-09-05 00:21:19.365 [INFO][4116] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" iface="eth0" netns="/var/run/netns/cni-23635f5b-2182-c328-4b5f-4ed6b804ef08" Sep 5 00:21:19.400960 containerd[1469]: 2025-09-05 00:21:19.365 [INFO][4116] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" iface="eth0" netns="/var/run/netns/cni-23635f5b-2182-c328-4b5f-4ed6b804ef08" Sep 5 00:21:19.400960 containerd[1469]: 2025-09-05 00:21:19.365 [INFO][4116] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" iface="eth0" netns="/var/run/netns/cni-23635f5b-2182-c328-4b5f-4ed6b804ef08" Sep 5 00:21:19.400960 containerd[1469]: 2025-09-05 00:21:19.365 [INFO][4116] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Sep 5 00:21:19.400960 containerd[1469]: 2025-09-05 00:21:19.365 [INFO][4116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Sep 5 00:21:19.400960 containerd[1469]: 2025-09-05 00:21:19.387 [INFO][4124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" HandleID="k8s-pod-network.3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Workload="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0" Sep 5 00:21:19.400960 containerd[1469]: 2025-09-05 00:21:19.387 [INFO][4124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:19.400960 containerd[1469]: 2025-09-05 00:21:19.387 [INFO][4124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:19.400960 containerd[1469]: 2025-09-05 00:21:19.394 [WARNING][4124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" HandleID="k8s-pod-network.3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Workload="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0" Sep 5 00:21:19.400960 containerd[1469]: 2025-09-05 00:21:19.394 [INFO][4124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" HandleID="k8s-pod-network.3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Workload="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0" Sep 5 00:21:19.400960 containerd[1469]: 2025-09-05 00:21:19.395 [INFO][4124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:19.400960 containerd[1469]: 2025-09-05 00:21:19.398 [INFO][4116] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Sep 5 00:21:19.401691 containerd[1469]: time="2025-09-05T00:21:19.401643693Z" level=info msg="TearDown network for sandbox \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\" successfully" Sep 5 00:21:19.401744 containerd[1469]: time="2025-09-05T00:21:19.401690889Z" level=info msg="StopPodSandbox for \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\" returns successfully" Sep 5 00:21:19.402450 containerd[1469]: time="2025-09-05T00:21:19.402415824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575c74c6d4-v8wf5,Uid:6875358a-dc29-4093-b0ab-9707efb59b86,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:21:19.405280 systemd[1]: run-netns-cni\x2d23635f5b\x2d2182\x2dc328\x2d4b5f\x2d4ed6b804ef08.mount: Deactivated successfully. Sep 5 00:21:19.548677 systemd-networkd[1387]: calid2386624047: Link UP Sep 5 00:21:19.549753 systemd-networkd[1387]: calid2386624047: Gained carrier Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.442 [INFO][4133] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.454 [INFO][4133] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0 calico-apiserver-575c74c6d4- calico-apiserver 6875358a-dc29-4093-b0ab-9707efb59b86 957 0 2025-09-05 00:20:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:575c74c6d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-575c74c6d4-v8wf5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid2386624047 [] [] }} ContainerID="84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-v8wf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-" Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.454 [INFO][4133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-v8wf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0" Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.482 [INFO][4146] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" HandleID="k8s-pod-network.84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" Workload="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0" Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.482 [INFO][4146] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" HandleID="k8s-pod-network.84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" Workload="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000510560), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-575c74c6d4-v8wf5", "timestamp":"2025-09-05 00:21:19.482196943 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.482 [INFO][4146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.482 [INFO][4146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.482 [INFO][4146] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.491 [INFO][4146] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" host="localhost" Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.495 [INFO][4146] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.499 [INFO][4146] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.501 [INFO][4146] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.503 [INFO][4146] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.503 [INFO][4146] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" host="localhost" Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.504 [INFO][4146] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.510 [INFO][4146] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" host="localhost" Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.543 [INFO][4146] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" host="localhost" Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.543 [INFO][4146] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" host="localhost" Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.543 [INFO][4146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:19.562124 containerd[1469]: 2025-09-05 00:21:19.543 [INFO][4146] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" HandleID="k8s-pod-network.84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" Workload="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0" Sep 5 00:21:19.562970 containerd[1469]: 2025-09-05 00:21:19.546 [INFO][4133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-v8wf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0", GenerateName:"calico-apiserver-575c74c6d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6875358a-dc29-4093-b0ab-9707efb59b86", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575c74c6d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-575c74c6d4-v8wf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2386624047", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:19.562970 containerd[1469]: 2025-09-05 00:21:19.547 [INFO][4133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-v8wf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0" Sep 5 00:21:19.562970 containerd[1469]: 2025-09-05 00:21:19.547 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2386624047 ContainerID="84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-v8wf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0" Sep 5 00:21:19.562970 containerd[1469]: 2025-09-05 00:21:19.549 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-v8wf5" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0" Sep 5 00:21:19.562970 containerd[1469]: 2025-09-05 00:21:19.549 [INFO][4133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-v8wf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0", GenerateName:"calico-apiserver-575c74c6d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6875358a-dc29-4093-b0ab-9707efb59b86", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575c74c6d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a", Pod:"calico-apiserver-575c74c6d4-v8wf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2386624047", MAC:"ca:f7:29:02:5f:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:19.562970 containerd[1469]: 2025-09-05 00:21:19.557 [INFO][4133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-v8wf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0" Sep 5 00:21:19.583654 containerd[1469]: time="2025-09-05T00:21:19.582014215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:19.583654 containerd[1469]: time="2025-09-05T00:21:19.582129003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:19.583654 containerd[1469]: time="2025-09-05T00:21:19.582145293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:19.583654 containerd[1469]: time="2025-09-05T00:21:19.582269117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:19.604948 systemd[1]: Started cri-containerd-84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a.scope - libcontainer container 84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a. 
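Both veth MACs minted for the new endpoints (62:30:68:66:96:2c for the whisker pod, ca:f7:29:02:5f:c9 here) have the locally-administered bit set and the multicast bit clear, as generated interface MACs should. A quick check of those two bits on the values copied from the log:

package main

import (
	"fmt"
	"net"
)

func main() {
	for _, s := range []string{"62:30:68:66:96:2c", "ca:f7:29:02:5f:c9"} {
		mac, err := net.ParseMAC(s)
		if err != nil {
			panic(err)
		}
		// Bit 1 of the first octet: locally administered; bit 0: multicast.
		fmt.Printf("%s local=%t multicast=%t\n", s, mac[0]&0x02 != 0, mac[0]&0x01 != 0)
	}
}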
Sep 5 00:21:19.623966 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:21:19.650531 containerd[1469]: time="2025-09-05T00:21:19.650494765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575c74c6d4-v8wf5,Uid:6875358a-dc29-4093-b0ab-9707efb59b86,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a\"" Sep 5 00:21:20.153239 systemd-networkd[1387]: cali7e4db9f7464: Gained IPv6LL Sep 5 00:21:20.518282 containerd[1469]: time="2025-09-05T00:21:20.518221633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:20.519023 containerd[1469]: time="2025-09-05T00:21:20.518955468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 5 00:21:20.520181 containerd[1469]: time="2025-09-05T00:21:20.520134501Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:20.522356 containerd[1469]: time="2025-09-05T00:21:20.522288748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:20.523238 containerd[1469]: time="2025-09-05T00:21:20.523184296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.595716996s" Sep 5 00:21:20.523238 containerd[1469]: time="2025-09-05T00:21:20.523225341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 5 00:21:20.524292 containerd[1469]: time="2025-09-05T00:21:20.524252899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:21:20.528586 containerd[1469]: time="2025-09-05T00:21:20.528523814Z" level=info msg="CreateContainer within sandbox \"4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 00:21:20.545914 containerd[1469]: time="2025-09-05T00:21:20.545859653Z" level=info msg="CreateContainer within sandbox \"4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5949a28a8cefaab37e49916cea489107509c6dba71ce2d8561140b88d7a3a3a3\"" Sep 5 00:21:20.546481 containerd[1469]: time="2025-09-05T00:21:20.546448824Z" level=info msg="StartContainer for \"5949a28a8cefaab37e49916cea489107509c6dba71ce2d8561140b88d7a3a3a3\"" Sep 5 00:21:20.578002 systemd[1]: Started cri-containerd-5949a28a8cefaab37e49916cea489107509c6dba71ce2d8561140b88d7a3a3a3.scope - libcontainer container 5949a28a8cefaab37e49916cea489107509c6dba71ce2d8561140b88d7a3a3a3. 
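The logged pull duration is internally consistent: the PullImage request for the whisker image lands at 00:21:18.927 and the Pulled event at 00:21:20.523, a 1.596s gap matching the reported 1.595716996s. Checking with the two timestamps trimmed to milliseconds:

package main

import (
	"fmt"
	"time"
)

func main() {
	start, _ := time.Parse(time.RFC3339Nano, "2025-09-05T00:21:18.927Z") // PullImage
	end, _ := time.Parse(time.RFC3339Nano, "2025-09-05T00:21:20.523Z")   // Pulled
	fmt.Println(end.Sub(start)) // 1.596s, in line with the logged duration
}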
Sep 5 00:21:20.620241 containerd[1469]: time="2025-09-05T00:21:20.620182280Z" level=info msg="StartContainer for \"5949a28a8cefaab37e49916cea489107509c6dba71ce2d8561140b88d7a3a3a3\" returns successfully" Sep 5 00:21:20.917977 systemd-networkd[1387]: calid2386624047: Gained IPv6LL Sep 5 00:21:21.321518 containerd[1469]: time="2025-09-05T00:21:21.321446142Z" level=info msg="StopPodSandbox for \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\"" Sep 5 00:21:21.321738 containerd[1469]: time="2025-09-05T00:21:21.321679877Z" level=info msg="StopPodSandbox for \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\"" Sep 5 00:21:21.413211 containerd[1469]: 2025-09-05 00:21:21.374 [INFO][4340] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:21.413211 containerd[1469]: 2025-09-05 00:21:21.374 [INFO][4340] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" iface="eth0" netns="/var/run/netns/cni-9d525a9a-4567-7e6a-4195-f2b086ab3747" Sep 5 00:21:21.413211 containerd[1469]: 2025-09-05 00:21:21.375 [INFO][4340] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" iface="eth0" netns="/var/run/netns/cni-9d525a9a-4567-7e6a-4195-f2b086ab3747" Sep 5 00:21:21.413211 containerd[1469]: 2025-09-05 00:21:21.375 [INFO][4340] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" iface="eth0" netns="/var/run/netns/cni-9d525a9a-4567-7e6a-4195-f2b086ab3747" Sep 5 00:21:21.413211 containerd[1469]: 2025-09-05 00:21:21.375 [INFO][4340] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:21.413211 containerd[1469]: 2025-09-05 00:21:21.375 [INFO][4340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:21.413211 containerd[1469]: 2025-09-05 00:21:21.399 [INFO][4362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" HandleID="k8s-pod-network.86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Workload="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:21.413211 containerd[1469]: 2025-09-05 00:21:21.400 [INFO][4362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:21.413211 containerd[1469]: 2025-09-05 00:21:21.400 [INFO][4362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:21.413211 containerd[1469]: 2025-09-05 00:21:21.406 [WARNING][4362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" HandleID="k8s-pod-network.86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Workload="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:21.413211 containerd[1469]: 2025-09-05 00:21:21.406 [INFO][4362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" HandleID="k8s-pod-network.86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Workload="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:21.413211 containerd[1469]: 2025-09-05 00:21:21.407 [INFO][4362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:21.413211 containerd[1469]: 2025-09-05 00:21:21.410 [INFO][4340] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:21.413670 containerd[1469]: time="2025-09-05T00:21:21.413496683Z" level=info msg="TearDown network for sandbox \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\" successfully" Sep 5 00:21:21.413670 containerd[1469]: time="2025-09-05T00:21:21.413524053Z" level=info msg="StopPodSandbox for \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\" returns successfully" Sep 5 00:21:21.416510 containerd[1469]: time="2025-09-05T00:21:21.416450826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575c74c6d4-llg82,Uid:4a619295-d161-4fe3-866f-7232abcadd6e,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:21:21.416714 systemd[1]: run-netns-cni\x2d9d525a9a\x2d4567\x2d7e6a\x2d4195\x2df2b086ab3747.mount: Deactivated successfully. Sep 5 00:21:21.573417 containerd[1469]: 2025-09-05 00:21:21.369 [INFO][4339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:21.573417 containerd[1469]: 2025-09-05 00:21:21.369 [INFO][4339] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" iface="eth0" netns="/var/run/netns/cni-30234dd5-f890-3a92-045e-ae1d1f3c73ff" Sep 5 00:21:21.573417 containerd[1469]: 2025-09-05 00:21:21.369 [INFO][4339] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" iface="eth0" netns="/var/run/netns/cni-30234dd5-f890-3a92-045e-ae1d1f3c73ff" Sep 5 00:21:21.573417 containerd[1469]: 2025-09-05 00:21:21.370 [INFO][4339] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" iface="eth0" netns="/var/run/netns/cni-30234dd5-f890-3a92-045e-ae1d1f3c73ff" Sep 5 00:21:21.573417 containerd[1469]: 2025-09-05 00:21:21.370 [INFO][4339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:21.573417 containerd[1469]: 2025-09-05 00:21:21.370 [INFO][4339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:21.573417 containerd[1469]: 2025-09-05 00:21:21.402 [INFO][4356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" HandleID="k8s-pod-network.53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Workload="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:21.573417 containerd[1469]: 2025-09-05 00:21:21.402 [INFO][4356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:21.573417 containerd[1469]: 2025-09-05 00:21:21.407 [INFO][4356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:21.573417 containerd[1469]: 2025-09-05 00:21:21.565 [WARNING][4356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" HandleID="k8s-pod-network.53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Workload="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:21.573417 containerd[1469]: 2025-09-05 00:21:21.565 [INFO][4356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" HandleID="k8s-pod-network.53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Workload="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:21.573417 containerd[1469]: 2025-09-05 00:21:21.567 [INFO][4356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:21.573417 containerd[1469]: 2025-09-05 00:21:21.570 [INFO][4339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:21.573992 containerd[1469]: time="2025-09-05T00:21:21.573517779Z" level=info msg="TearDown network for sandbox \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\" successfully" Sep 5 00:21:21.573992 containerd[1469]: time="2025-09-05T00:21:21.573544397Z" level=info msg="StopPodSandbox for \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\" returns successfully" Sep 5 00:21:21.574411 containerd[1469]: time="2025-09-05T00:21:21.574317596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ps2kf,Uid:9c21203f-d0d0-4ce5-8193-33854aaca356,Namespace:calico-system,Attempt:1,}" Sep 5 00:21:21.577767 systemd[1]: run-netns-cni\x2d30234dd5\x2df890\x2d3a92\x2d045e\x2dae1d1f3c73ff.mount: Deactivated successfully. 
Sep 5 00:21:21.893631 systemd-networkd[1387]: cali654ae93e9b6: Link UP Sep 5 00:21:21.894265 systemd-networkd[1387]: cali654ae93e9b6: Gained carrier Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.812 [INFO][4376] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.825 [INFO][4376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0 calico-apiserver-575c74c6d4- calico-apiserver 4a619295-d161-4fe3-866f-7232abcadd6e 983 0 2025-09-05 00:20:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:575c74c6d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-575c74c6d4-llg82 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali654ae93e9b6 [] [] }} ContainerID="a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-llg82" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--llg82-" Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.827 [INFO][4376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-llg82" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.854 [INFO][4402] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" HandleID="k8s-pod-network.a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" Workload="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.854 [INFO][4402] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" HandleID="k8s-pod-network.a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" Workload="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bf530), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-575c74c6d4-llg82", "timestamp":"2025-09-05 00:21:21.854404611 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.854 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.854 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.854 [INFO][4402] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.860 [INFO][4402] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" host="localhost" Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.864 [INFO][4402] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.871 [INFO][4402] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.872 [INFO][4402] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.874 [INFO][4402] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.874 [INFO][4402] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" host="localhost" Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.875 [INFO][4402] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0 Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.881 [INFO][4402] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" host="localhost" Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.885 [INFO][4402] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" host="localhost" Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.885 [INFO][4402] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" host="localhost" Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.885 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:21:21.909646 containerd[1469]: 2025-09-05 00:21:21.885 [INFO][4402] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" HandleID="k8s-pod-network.a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" Workload="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:21.910842 containerd[1469]: 2025-09-05 00:21:21.889 [INFO][4376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-llg82" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0", GenerateName:"calico-apiserver-575c74c6d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a619295-d161-4fe3-866f-7232abcadd6e", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575c74c6d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-575c74c6d4-llg82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali654ae93e9b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:21.910842 containerd[1469]: 2025-09-05 00:21:21.889 [INFO][4376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-llg82" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:21.910842 containerd[1469]: 2025-09-05 00:21:21.889 [INFO][4376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali654ae93e9b6 ContainerID="a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-llg82" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:21.910842 containerd[1469]: 2025-09-05 00:21:21.894 [INFO][4376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-llg82" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:21.910842 containerd[1469]: 2025-09-05 00:21:21.894 [INFO][4376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-llg82" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0", GenerateName:"calico-apiserver-575c74c6d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a619295-d161-4fe3-866f-7232abcadd6e", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575c74c6d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0", Pod:"calico-apiserver-575c74c6d4-llg82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali654ae93e9b6", MAC:"02:57:81:db:ef:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:21.910842 containerd[1469]: 2025-09-05 00:21:21.904 [INFO][4376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0" Namespace="calico-apiserver" Pod="calico-apiserver-575c74c6d4-llg82" WorkloadEndpoint="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:21.928367 containerd[1469]: time="2025-09-05T00:21:21.928266441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:21.928367 containerd[1469]: time="2025-09-05T00:21:21.928323725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:21.928367 containerd[1469]: time="2025-09-05T00:21:21.928345935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:21.928590 containerd[1469]: time="2025-09-05T00:21:21.928435038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:21.959968 systemd[1]: Started cri-containerd-a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0.scope - libcontainer container a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0. 
Sep 5 00:21:21.974886 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:21:21.992579 systemd-networkd[1387]: cali9e3a140bd34: Link UP Sep 5 00:21:21.993278 systemd-networkd[1387]: cali9e3a140bd34: Gained carrier Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.831 [INFO][4387] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.844 [INFO][4387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ps2kf-eth0 csi-node-driver- calico-system 9c21203f-d0d0-4ce5-8193-33854aaca356 982 0 2025-09-05 00:20:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ps2kf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9e3a140bd34 [] [] }} ContainerID="93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" Namespace="calico-system" Pod="csi-node-driver-ps2kf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ps2kf-" Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.844 [INFO][4387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" Namespace="calico-system" Pod="csi-node-driver-ps2kf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.871 [INFO][4410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" HandleID="k8s-pod-network.93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" Workload="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.872 [INFO][4410] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" HandleID="k8s-pod-network.93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" Workload="localhost-k8s-csi--node--driver--ps2kf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ps2kf", "timestamp":"2025-09-05 00:21:21.871928012 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.872 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.885 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.886 [INFO][4410] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.962 [INFO][4410] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" host="localhost" Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.968 [INFO][4410] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.973 [INFO][4410] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.974 [INFO][4410] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.977 [INFO][4410] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.977 [INFO][4410] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" host="localhost" Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.978 [INFO][4410] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.981 [INFO][4410] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" host="localhost" Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.987 [INFO][4410] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" host="localhost" Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.987 [INFO][4410] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" host="localhost" Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.987 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:21:22.009902 containerd[1469]: 2025-09-05 00:21:21.987 [INFO][4410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" HandleID="k8s-pod-network.93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" Workload="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:22.010477 containerd[1469]: 2025-09-05 00:21:21.990 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" Namespace="calico-system" Pod="csi-node-driver-ps2kf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ps2kf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ps2kf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c21203f-d0d0-4ce5-8193-33854aaca356", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ps2kf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e3a140bd34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:22.010477 containerd[1469]: 2025-09-05 00:21:21.990 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" Namespace="calico-system" Pod="csi-node-driver-ps2kf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:22.010477 containerd[1469]: 2025-09-05 00:21:21.990 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e3a140bd34 ContainerID="93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" Namespace="calico-system" Pod="csi-node-driver-ps2kf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:22.010477 containerd[1469]: 2025-09-05 00:21:21.993 [INFO][4387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" Namespace="calico-system" Pod="csi-node-driver-ps2kf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:22.010477 containerd[1469]: 2025-09-05 00:21:21.993 [INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" Namespace="calico-system" Pod="csi-node-driver-ps2kf" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--ps2kf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ps2kf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c21203f-d0d0-4ce5-8193-33854aaca356", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d", Pod:"csi-node-driver-ps2kf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e3a140bd34", MAC:"e6:5a:aa:60:c3:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:22.010477 containerd[1469]: 2025-09-05 00:21:22.006 [INFO][4387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d" Namespace="calico-system" Pod="csi-node-driver-ps2kf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:22.020946 containerd[1469]: time="2025-09-05T00:21:22.020905633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575c74c6d4-llg82,Uid:4a619295-d161-4fe3-866f-7232abcadd6e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0\"" Sep 5 00:21:22.043043 containerd[1469]: time="2025-09-05T00:21:22.042925864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:22.043183 containerd[1469]: time="2025-09-05T00:21:22.043067753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:22.043328 containerd[1469]: time="2025-09-05T00:21:22.043101464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:22.044424 containerd[1469]: time="2025-09-05T00:21:22.044279807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:22.068963 systemd[1]: Started cri-containerd-93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d.scope - libcontainer container 93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d. 
Sep 5 00:21:22.099340 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:21:22.118319 containerd[1469]: time="2025-09-05T00:21:22.118265611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ps2kf,Uid:9c21203f-d0d0-4ce5-8193-33854aaca356,Namespace:calico-system,Attempt:1,} returns sandbox id \"93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d\"" Sep 5 00:21:22.394885 systemd[1]: Started sshd@10-10.0.0.128:22-10.0.0.1:47368.service - OpenSSH per-connection server daemon (10.0.0.1:47368). Sep 5 00:21:22.445222 sshd[4537]: Accepted publickey for core from 10.0.0.1 port 47368 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:21:22.447734 sshd[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:21:22.451928 systemd-logind[1451]: New session 11 of user core. Sep 5 00:21:22.464036 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:21:22.597228 sshd[4537]: pam_unix(sshd:session): session closed for user core Sep 5 00:21:22.603434 systemd[1]: sshd@10-10.0.0.128:22-10.0.0.1:47368.service: Deactivated successfully. Sep 5 00:21:22.605549 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:21:22.606138 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Sep 5 00:21:22.607081 systemd-logind[1451]: Removed session 11. Sep 5 00:21:23.159009 systemd-networkd[1387]: cali9e3a140bd34: Gained IPv6LL Sep 5 00:21:23.222971 systemd-networkd[1387]: cali654ae93e9b6: Gained IPv6LL Sep 5 00:21:23.321266 containerd[1469]: time="2025-09-05T00:21:23.321209381Z" level=info msg="StopPodSandbox for \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\"" Sep 5 00:21:23.321266 containerd[1469]: time="2025-09-05T00:21:23.321243204Z" level=info msg="StopPodSandbox for \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\"" Sep 5 00:21:23.321816 containerd[1469]: time="2025-09-05T00:21:23.321632276Z" level=info msg="StopPodSandbox for \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\"" Sep 5 00:21:23.323998 containerd[1469]: time="2025-09-05T00:21:23.321209472Z" level=info msg="StopPodSandbox for \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\"" Sep 5 00:21:23.452717 containerd[1469]: 2025-09-05 00:21:23.382 [INFO][4613] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:23.452717 containerd[1469]: 2025-09-05 00:21:23.383 [INFO][4613] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" iface="eth0" netns="/var/run/netns/cni-e8783017-e8be-1b83-4374-0437ed559ffe" Sep 5 00:21:23.452717 containerd[1469]: 2025-09-05 00:21:23.384 [INFO][4613] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" iface="eth0" netns="/var/run/netns/cni-e8783017-e8be-1b83-4374-0437ed559ffe" Sep 5 00:21:23.452717 containerd[1469]: 2025-09-05 00:21:23.385 [INFO][4613] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" iface="eth0" netns="/var/run/netns/cni-e8783017-e8be-1b83-4374-0437ed559ffe" Sep 5 00:21:23.452717 containerd[1469]: 2025-09-05 00:21:23.385 [INFO][4613] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:23.452717 containerd[1469]: 2025-09-05 00:21:23.386 [INFO][4613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:23.452717 containerd[1469]: 2025-09-05 00:21:23.428 [INFO][4653] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" HandleID="k8s-pod-network.13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Workload="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:23.452717 containerd[1469]: 2025-09-05 00:21:23.432 [INFO][4653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:23.452717 containerd[1469]: 2025-09-05 00:21:23.432 [INFO][4653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:23.452717 containerd[1469]: 2025-09-05 00:21:23.440 [WARNING][4653] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" HandleID="k8s-pod-network.13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Workload="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:23.452717 containerd[1469]: 2025-09-05 00:21:23.440 [INFO][4653] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" HandleID="k8s-pod-network.13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Workload="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:23.452717 containerd[1469]: 2025-09-05 00:21:23.444 [INFO][4653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:23.452717 containerd[1469]: 2025-09-05 00:21:23.449 [INFO][4613] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:23.455917 containerd[1469]: time="2025-09-05T00:21:23.452872500Z" level=info msg="TearDown network for sandbox \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\" successfully" Sep 5 00:21:23.455917 containerd[1469]: time="2025-09-05T00:21:23.455758311Z" level=info msg="StopPodSandbox for \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\" returns successfully" Sep 5 00:21:23.457049 systemd[1]: run-netns-cni\x2de8783017\x2de8be\x2d1b83\x2d4374\x2d0437ed559ffe.mount: Deactivated successfully. Sep 5 00:21:23.457352 containerd[1469]: time="2025-09-05T00:21:23.457206240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-k7qc4,Uid:ecf2b887-548a-428c-a0ae-a36f75934ba9,Namespace:calico-system,Attempt:1,}" Sep 5 00:21:23.470209 containerd[1469]: 2025-09-05 00:21:23.397 [INFO][4616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:23.470209 containerd[1469]: 2025-09-05 00:21:23.397 [INFO][4616] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" iface="eth0" netns="/var/run/netns/cni-05268f01-d9dc-4f3e-0a37-b131013170de" Sep 5 00:21:23.470209 containerd[1469]: 2025-09-05 00:21:23.397 [INFO][4616] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" iface="eth0" netns="/var/run/netns/cni-05268f01-d9dc-4f3e-0a37-b131013170de" Sep 5 00:21:23.470209 containerd[1469]: 2025-09-05 00:21:23.397 [INFO][4616] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" iface="eth0" netns="/var/run/netns/cni-05268f01-d9dc-4f3e-0a37-b131013170de" Sep 5 00:21:23.470209 containerd[1469]: 2025-09-05 00:21:23.398 [INFO][4616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:23.470209 containerd[1469]: 2025-09-05 00:21:23.398 [INFO][4616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:23.470209 containerd[1469]: 2025-09-05 00:21:23.446 [INFO][4660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" HandleID="k8s-pod-network.79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Workload="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:23.470209 containerd[1469]: 2025-09-05 00:21:23.446 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:23.470209 containerd[1469]: 2025-09-05 00:21:23.446 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:23.470209 containerd[1469]: 2025-09-05 00:21:23.452 [WARNING][4660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" HandleID="k8s-pod-network.79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Workload="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:23.470209 containerd[1469]: 2025-09-05 00:21:23.456 [INFO][4660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" HandleID="k8s-pod-network.79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Workload="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:23.470209 containerd[1469]: 2025-09-05 00:21:23.457 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:23.470209 containerd[1469]: 2025-09-05 00:21:23.464 [INFO][4616] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:23.470643 containerd[1469]: time="2025-09-05T00:21:23.470436691Z" level=info msg="TearDown network for sandbox \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\" successfully" Sep 5 00:21:23.470643 containerd[1469]: time="2025-09-05T00:21:23.470461286Z" level=info msg="StopPodSandbox for \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\" returns successfully" Sep 5 00:21:23.474864 containerd[1469]: time="2025-09-05T00:21:23.471307955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db846b456-d8kdg,Uid:e434d84b-5a4e-4e9f-be35-307612a171bb,Namespace:calico-system,Attempt:1,}" Sep 5 00:21:23.475011 systemd[1]: run-netns-cni\x2d05268f01\x2dd9dc\x2d4f3e\x2d0a37\x2db131013170de.mount: Deactivated successfully. Sep 5 00:21:23.480187 containerd[1469]: 2025-09-05 00:21:23.416 [INFO][4626] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Sep 5 00:21:23.480187 containerd[1469]: 2025-09-05 00:21:23.417 [INFO][4626] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" iface="eth0" netns="/var/run/netns/cni-8d890e66-484a-f436-fb91-0636d656cfc8" Sep 5 00:21:23.480187 containerd[1469]: 2025-09-05 00:21:23.417 [INFO][4626] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" iface="eth0" netns="/var/run/netns/cni-8d890e66-484a-f436-fb91-0636d656cfc8" Sep 5 00:21:23.480187 containerd[1469]: 2025-09-05 00:21:23.417 [INFO][4626] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" iface="eth0" netns="/var/run/netns/cni-8d890e66-484a-f436-fb91-0636d656cfc8" Sep 5 00:21:23.480187 containerd[1469]: 2025-09-05 00:21:23.417 [INFO][4626] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Sep 5 00:21:23.480187 containerd[1469]: 2025-09-05 00:21:23.417 [INFO][4626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Sep 5 00:21:23.480187 containerd[1469]: 2025-09-05 00:21:23.454 [INFO][4668] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" HandleID="k8s-pod-network.6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Workload="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" Sep 5 00:21:23.480187 containerd[1469]: 2025-09-05 00:21:23.456 [INFO][4668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:23.480187 containerd[1469]: 2025-09-05 00:21:23.458 [INFO][4668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:23.480187 containerd[1469]: 2025-09-05 00:21:23.463 [WARNING][4668] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" HandleID="k8s-pod-network.6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Workload="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" Sep 5 00:21:23.480187 containerd[1469]: 2025-09-05 00:21:23.463 [INFO][4668] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" HandleID="k8s-pod-network.6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Workload="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" Sep 5 00:21:23.480187 containerd[1469]: 2025-09-05 00:21:23.465 [INFO][4668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:23.480187 containerd[1469]: 2025-09-05 00:21:23.469 [INFO][4626] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Sep 5 00:21:23.480895 containerd[1469]: time="2025-09-05T00:21:23.480866406Z" level=info msg="TearDown network for sandbox \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\" successfully" Sep 5 00:21:23.480939 containerd[1469]: time="2025-09-05T00:21:23.480892965Z" level=info msg="StopPodSandbox for \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\" returns successfully" Sep 5 00:21:23.481804 kubelet[2562]: E0905 00:21:23.481576 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:23.488938 containerd[1469]: time="2025-09-05T00:21:23.485639571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lntfn,Uid:4cebde53-a50f-40b3-8246-ec9a25623456,Namespace:kube-system,Attempt:1,}" Sep 5 00:21:23.487155 systemd[1]: run-netns-cni\x2d8d890e66\x2d484a\x2df436\x2dfb91\x2d0636d656cfc8.mount: Deactivated successfully. Sep 5 00:21:23.512952 containerd[1469]: 2025-09-05 00:21:23.438 [INFO][4639] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:23.512952 containerd[1469]: 2025-09-05 00:21:23.438 [INFO][4639] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" iface="eth0" netns="/var/run/netns/cni-5d9b851a-93ac-07f2-329c-ade8b45778c6" Sep 5 00:21:23.512952 containerd[1469]: 2025-09-05 00:21:23.438 [INFO][4639] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" iface="eth0" netns="/var/run/netns/cni-5d9b851a-93ac-07f2-329c-ade8b45778c6" Sep 5 00:21:23.512952 containerd[1469]: 2025-09-05 00:21:23.438 [INFO][4639] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" iface="eth0" netns="/var/run/netns/cni-5d9b851a-93ac-07f2-329c-ade8b45778c6" Sep 5 00:21:23.512952 containerd[1469]: 2025-09-05 00:21:23.438 [INFO][4639] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:23.512952 containerd[1469]: 2025-09-05 00:21:23.438 [INFO][4639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:23.512952 containerd[1469]: 2025-09-05 00:21:23.483 [INFO][4678] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" HandleID="k8s-pod-network.6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Workload="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:23.512952 containerd[1469]: 2025-09-05 00:21:23.484 [INFO][4678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:23.512952 containerd[1469]: 2025-09-05 00:21:23.484 [INFO][4678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:23.512952 containerd[1469]: 2025-09-05 00:21:23.494 [WARNING][4678] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" HandleID="k8s-pod-network.6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Workload="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:23.512952 containerd[1469]: 2025-09-05 00:21:23.494 [INFO][4678] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" HandleID="k8s-pod-network.6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Workload="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:23.512952 containerd[1469]: 2025-09-05 00:21:23.498 [INFO][4678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:23.512952 containerd[1469]: 2025-09-05 00:21:23.502 [INFO][4639] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:23.513352 containerd[1469]: time="2025-09-05T00:21:23.513265493Z" level=info msg="TearDown network for sandbox \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\" successfully" Sep 5 00:21:23.513352 containerd[1469]: time="2025-09-05T00:21:23.513287874Z" level=info msg="StopPodSandbox for \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\" returns successfully" Sep 5 00:21:23.513827 kubelet[2562]: E0905 00:21:23.513767 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:23.514365 containerd[1469]: time="2025-09-05T00:21:23.514315925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f6n82,Uid:e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3,Namespace:kube-system,Attempt:1,}" Sep 5 00:21:23.629740 systemd-networkd[1387]: cali33bb3336e74: Link UP Sep 5 00:21:23.630030 systemd-networkd[1387]: cali33bb3336e74: Gained carrier Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.534 [INFO][4699] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.548 [INFO][4699] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0 calico-kube-controllers-7db846b456- calico-system e434d84b-5a4e-4e9f-be35-307612a171bb 1008 0 2025-09-05 00:20:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7db846b456 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7db846b456-d8kdg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali33bb3336e74 [] [] }} ContainerID="7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" Namespace="calico-system" Pod="calico-kube-controllers-7db846b456-d8kdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-" Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.548 [INFO][4699] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" Namespace="calico-system" Pod="calico-kube-controllers-7db846b456-d8kdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.584 [INFO][4748] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" HandleID="k8s-pod-network.7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" Workload="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.584 [INFO][4748] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" HandleID="k8s-pod-network.7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" Workload="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e30), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7db846b456-d8kdg", "timestamp":"2025-09-05 00:21:23.584403939 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.584 [INFO][4748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.585 [INFO][4748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.585 [INFO][4748] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.593 [INFO][4748] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" host="localhost" Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.599 [INFO][4748] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.603 [INFO][4748] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.605 [INFO][4748] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.607 [INFO][4748] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.607 [INFO][4748] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" host="localhost" Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.609 [INFO][4748] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3 Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.615 [INFO][4748] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" host="localhost" Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.620 [INFO][4748] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" host="localhost" Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.620 [INFO][4748] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" host="localhost" Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.620 [INFO][4748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
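Annotation: the kubelet dns.go:153 errors interleaved above are unrelated to the CNI work; the host's resolv.conf lists more nameservers than the three glibc supports (MAXNS), so the kubelet applies the first three (1.1.1.1 1.0.0.1 8.8.8.8) and reports the rest as omitted. Schematically, with a hypothetical helper name and a made-up fourth entry:

package main

import "fmt"

const maxNameservers = 3 // glibc MAXNS; kubelet enforces the same cap

// cap3 keeps the first three nameservers and returns the omitted rest.
func cap3(ns []string) (applied, omitted []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	// "192.168.1.1" is an invented fourth entry for illustration.
	applied, omitted := cap3([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"})
	fmt.Println("applied:", applied, "omitted:", omitted)
}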
Sep 5 00:21:23.650674 containerd[1469]: 2025-09-05 00:21:23.621 [INFO][4748] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" HandleID="k8s-pod-network.7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" Workload="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:23.651511 containerd[1469]: 2025-09-05 00:21:23.625 [INFO][4699] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" Namespace="calico-system" Pod="calico-kube-controllers-7db846b456-d8kdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0", GenerateName:"calico-kube-controllers-7db846b456-", Namespace:"calico-system", SelfLink:"", UID:"e434d84b-5a4e-4e9f-be35-307612a171bb", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db846b456", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7db846b456-d8kdg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33bb3336e74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:23.651511 containerd[1469]: 2025-09-05 00:21:23.625 [INFO][4699] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" Namespace="calico-system" Pod="calico-kube-controllers-7db846b456-d8kdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:23.651511 containerd[1469]: 2025-09-05 00:21:23.625 [INFO][4699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33bb3336e74 ContainerID="7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" Namespace="calico-system" Pod="calico-kube-controllers-7db846b456-d8kdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:23.651511 containerd[1469]: 2025-09-05 00:21:23.629 [INFO][4699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" Namespace="calico-system" Pod="calico-kube-controllers-7db846b456-d8kdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:23.651511 containerd[1469]: 2025-09-05 00:21:23.631 [INFO][4699] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" Namespace="calico-system" Pod="calico-kube-controllers-7db846b456-d8kdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0", GenerateName:"calico-kube-controllers-7db846b456-", Namespace:"calico-system", SelfLink:"", UID:"e434d84b-5a4e-4e9f-be35-307612a171bb", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db846b456", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3", Pod:"calico-kube-controllers-7db846b456-d8kdg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33bb3336e74", MAC:"4a:bd:20:21:ba:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:23.651511 containerd[1469]: 2025-09-05 00:21:23.648 [INFO][4699] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3" Namespace="calico-system" Pod="calico-kube-controllers-7db846b456-d8kdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:23.790100 systemd[1]: run-netns-cni\x2d5d9b851a\x2d93ac\x2d07f2\x2d329c\x2dade8b45778c6.mount: Deactivated successfully. Sep 5 00:21:23.820194 systemd-networkd[1387]: cali59ee6f6c436: Link UP Sep 5 00:21:23.822038 systemd-networkd[1387]: cali59ee6f6c436: Gained carrier Sep 5 00:21:23.828459 containerd[1469]: time="2025-09-05T00:21:23.825907423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:23.828459 containerd[1469]: time="2025-09-05T00:21:23.825978573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:23.828459 containerd[1469]: time="2025-09-05T00:21:23.825992789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:23.828459 containerd[1469]: time="2025-09-05T00:21:23.826089305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.524 [INFO][4688] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.545 [INFO][4688] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--k7qc4-eth0 goldmane-54d579b49d- calico-system ecf2b887-548a-428c-a0ae-a36f75934ba9 1007 0 2025-09-05 00:20:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-k7qc4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali59ee6f6c436 [] [] }} ContainerID="8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" Namespace="calico-system" Pod="goldmane-54d579b49d-k7qc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--k7qc4-" Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.546 [INFO][4688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" Namespace="calico-system" Pod="goldmane-54d579b49d-k7qc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.587 [INFO][4747] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" HandleID="k8s-pod-network.8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" Workload="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.587 [INFO][4747] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" HandleID="k8s-pod-network.8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" Workload="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-k7qc4", "timestamp":"2025-09-05 00:21:23.586977608 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.587 [INFO][4747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.621 [INFO][4747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.621 [INFO][4747] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.694 [INFO][4747] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" host="localhost" Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.791 [INFO][4747] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.796 [INFO][4747] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.798 [INFO][4747] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.801 [INFO][4747] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.801 [INFO][4747] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" host="localhost" Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.802 [INFO][4747] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.807 [INFO][4747] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" host="localhost" Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.812 [INFO][4747] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" host="localhost" Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.813 [INFO][4747] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" host="localhost" Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.813 [INFO][4747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
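The ipam/ipam.go sequence just logged — acquire the host-wide lock, look up the host's block affinities, try the affine 192.168.88.128/26 block, claim one address, release the lock — is the block-affinity assignment pattern these entries trace. Below is a minimal sketch of that control flow only; every type and function in it is a hypothetical stand-in, not Calico's actual API.

```go
// Hedged sketch of the block-affinity IPAM flow visible in the log above.
// All names here (block, assignOne, affine) are illustrative stand-ins.
package main

import (
	"fmt"
	"net"
	"sync"
)

type block struct {
	cidr net.IPNet
	used map[string]bool // addresses already handed out from this block
}

var (
	hostLock sync.Mutex            // the "host-wide IPAM lock"
	affine   = map[string]*block{} // host -> its affine block
)

// assignOne mirrors the logged steps: lock, find the host's affine
// block, claim the first free address, unlock.
func assignOne(host string) (net.IP, error) {
	hostLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	b, ok := affine[host] // "Looking up existing affinities for host"
	if !ok {
		return nil, fmt.Errorf("no affine block for %s", host)
	}
	// "Attempting to assign 1 addresses from block" — walk the block.
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if !b.used[ip.String()] {
			b.used[ip.String()] = true // "Writing block in order to claim IPs"
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr.String())
}

// next returns ip+1 without mutating its argument.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	affine["localhost"] = &block{cidr: *cidr, used: map[string]bool{}}
	ip, _ := assignOne("localhost")
	fmt.Println("assigned", ip)
}
```

Per-host affine blocks keep most assignments free of cross-node contention; the lock only serializes claims on this host, which matches the back-to-back acquire/release pairs logged for requests 4747, 4763, and 4773 in this window.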
Sep 5 00:21:23.841273 containerd[1469]: 2025-09-05 00:21:23.813 [INFO][4747] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" HandleID="k8s-pod-network.8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" Workload="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:23.842250 containerd[1469]: 2025-09-05 00:21:23.817 [INFO][4688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" Namespace="calico-system" Pod="goldmane-54d579b49d-k7qc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--k7qc4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ecf2b887-548a-428c-a0ae-a36f75934ba9", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-k7qc4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali59ee6f6c436", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:23.842250 containerd[1469]: 2025-09-05 00:21:23.817 [INFO][4688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" Namespace="calico-system" Pod="goldmane-54d579b49d-k7qc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:23.842250 containerd[1469]: 2025-09-05 00:21:23.817 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59ee6f6c436 ContainerID="8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" Namespace="calico-system" Pod="goldmane-54d579b49d-k7qc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:23.842250 containerd[1469]: 2025-09-05 00:21:23.821 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" Namespace="calico-system" Pod="goldmane-54d579b49d-k7qc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:23.842250 containerd[1469]: 2025-09-05 00:21:23.821 [INFO][4688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" Namespace="calico-system" Pod="goldmane-54d579b49d-k7qc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--k7qc4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ecf2b887-548a-428c-a0ae-a36f75934ba9", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a", Pod:"goldmane-54d579b49d-k7qc4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali59ee6f6c436", MAC:"be:2e:81:49:04:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:23.842250 containerd[1469]: 2025-09-05 00:21:23.837 [INFO][4688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a" Namespace="calico-system" Pod="goldmane-54d579b49d-k7qc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:23.861006 systemd[1]: Started cri-containerd-7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3.scope - libcontainer container 7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3. Sep 5 00:21:23.876693 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:21:23.913600 containerd[1469]: time="2025-09-05T00:21:23.913522609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db846b456-d8kdg,Uid:e434d84b-5a4e-4e9f-be35-307612a171bb,Namespace:calico-system,Attempt:1,} returns sandbox id \"7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3\"" Sep 5 00:21:24.001238 containerd[1469]: time="2025-09-05T00:21:24.000736230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:24.002209 containerd[1469]: time="2025-09-05T00:21:24.001274246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:24.002209 containerd[1469]: time="2025-09-05T00:21:24.001311344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:24.002209 containerd[1469]: time="2025-09-05T00:21:24.001462702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:24.012548 kubelet[2562]: I0905 00:21:24.012503 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:21:24.013485 kubelet[2562]: E0905 00:21:24.012956 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:24.043548 systemd[1]: Started cri-containerd-8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a.scope - libcontainer container 8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a. Sep 5 00:21:24.052376 systemd-networkd[1387]: cali1bfa36f353a: Link UP Sep 5 00:21:24.055059 systemd-networkd[1387]: cali1bfa36f353a: Gained carrier Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.553 [INFO][4714] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.568 [INFO][4714] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--lntfn-eth0 coredns-674b8bbfcf- kube-system 4cebde53-a50f-40b3-8246-ec9a25623456 1009 0 2025-09-05 00:20:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-lntfn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1bfa36f353a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lntfn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lntfn-" Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.568 [INFO][4714] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lntfn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.607 [INFO][4763] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" HandleID="k8s-pod-network.b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" Workload="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.607 [INFO][4763] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" HandleID="k8s-pod-network.b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" Workload="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a04b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-lntfn", "timestamp":"2025-09-05 00:21:23.607650727 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.608 [INFO][4763] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.813 [INFO][4763] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.813 [INFO][4763] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.826 [INFO][4763] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" host="localhost" Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.977 [INFO][4763] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.984 [INFO][4763] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.990 [INFO][4763] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.997 [INFO][4763] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:23.997 [INFO][4763] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" host="localhost" Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:24.002 [INFO][4763] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:24.011 [INFO][4763] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" host="localhost" Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:24.021 [INFO][4763] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" host="localhost" Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:24.026 [INFO][4763] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" host="localhost" Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:24.027 [INFO][4763] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
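The kubelet dns.go:153 warnings interleaved through this window ("Nameserver limits exceeded") fire because the libc resolver honors at most three nameserver lines in resolv.conf, so kubelet clips the merged list and logs what survived (here 1.1.1.1 1.0.0.1 8.8.8.8). A sketch of the clipping rule, with a hypothetical helper name rather than kubelet's actual code:

```go
// Why kubelet logs "Nameserver limits exceeded": the libc resolver
// reads at most three `nameserver` lines, so the rest are omitted.
// capNameservers is an illustrative helper, not kubelet's code.
package main

import "fmt"

const maxNameservers = 3 // glibc's MAXNS

func capNameservers(ns []string) (kept, dropped []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	kept, dropped := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
	fmt.Println("applied:", kept, "omitted:", dropped)
}
```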
Sep 5 00:21:24.077490 containerd[1469]: 2025-09-05 00:21:24.027 [INFO][4763] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" HandleID="k8s-pod-network.b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" Workload="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" Sep 5 00:21:24.078202 containerd[1469]: 2025-09-05 00:21:24.041 [INFO][4714] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lntfn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lntfn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4cebde53-a50f-40b3-8246-ec9a25623456", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-lntfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bfa36f353a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:24.078202 containerd[1469]: 2025-09-05 00:21:24.043 [INFO][4714] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lntfn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" Sep 5 00:21:24.078202 containerd[1469]: 2025-09-05 00:21:24.043 [INFO][4714] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bfa36f353a ContainerID="b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lntfn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" Sep 5 00:21:24.078202 containerd[1469]: 2025-09-05 00:21:24.056 [INFO][4714] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lntfn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" Sep 5 00:21:24.078202 
containerd[1469]: 2025-09-05 00:21:24.057 [INFO][4714] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lntfn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lntfn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4cebde53-a50f-40b3-8246-ec9a25623456", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f", Pod:"coredns-674b8bbfcf-lntfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bfa36f353a", MAC:"ea:98:20:61:0f:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:24.078202 containerd[1469]: 2025-09-05 00:21:24.073 [INFO][4714] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lntfn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" Sep 5 00:21:24.085339 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:21:24.089484 containerd[1469]: time="2025-09-05T00:21:24.089440084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:24.091673 containerd[1469]: time="2025-09-05T00:21:24.091634788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 5 00:21:24.092721 containerd[1469]: time="2025-09-05T00:21:24.092680245Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:24.100828 containerd[1469]: time="2025-09-05T00:21:24.100755131Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:24.103597 containerd[1469]: time="2025-09-05T00:21:24.103430637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.579142724s" Sep 5 00:21:24.103597 containerd[1469]: time="2025-09-05T00:21:24.103579861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 00:21:24.107509 containerd[1469]: time="2025-09-05T00:21:24.107476075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 00:21:24.107905 containerd[1469]: time="2025-09-05T00:21:24.107215527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:24.107905 containerd[1469]: time="2025-09-05T00:21:24.107292959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:24.107905 containerd[1469]: time="2025-09-05T00:21:24.107303468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:24.107905 containerd[1469]: time="2025-09-05T00:21:24.107388684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:24.110300 containerd[1469]: time="2025-09-05T00:21:24.110253727Z" level=info msg="CreateContainer within sandbox \"84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:21:24.127258 containerd[1469]: time="2025-09-05T00:21:24.127193117Z" level=info msg="CreateContainer within sandbox \"84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"697c4e50c91344901c03779311b053b88b132772a5b1677d995ba1832dd68488\"" Sep 5 00:21:24.129312 containerd[1469]: time="2025-09-05T00:21:24.129153171Z" level=info msg="StartContainer for \"697c4e50c91344901c03779311b053b88b132772a5b1677d995ba1832dd68488\"" Sep 5 00:21:24.137918 systemd[1]: Started cri-containerd-b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f.scope - libcontainer container b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f. 
Sep 5 00:21:24.147275 containerd[1469]: time="2025-09-05T00:21:24.147224105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-k7qc4,Uid:ecf2b887-548a-428c-a0ae-a36f75934ba9,Namespace:calico-system,Attempt:1,} returns sandbox id \"8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a\"" Sep 5 00:21:24.162687 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:21:24.168245 systemd-networkd[1387]: cali1b8478697b7: Link UP Sep 5 00:21:24.175077 systemd-networkd[1387]: cali1b8478697b7: Gained carrier Sep 5 00:21:24.177237 systemd[1]: Started cri-containerd-697c4e50c91344901c03779311b053b88b132772a5b1677d995ba1832dd68488.scope - libcontainer container 697c4e50c91344901c03779311b053b88b132772a5b1677d995ba1832dd68488. Sep 5 00:21:24.196424 containerd[1469]: time="2025-09-05T00:21:24.196374336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lntfn,Uid:4cebde53-a50f-40b3-8246-ec9a25623456,Namespace:kube-system,Attempt:1,} returns sandbox id \"b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f\"" Sep 5 00:21:24.197806 kubelet[2562]: E0905 00:21:24.197704 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:23.572 [INFO][4728] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:23.584 [INFO][4728] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--f6n82-eth0 coredns-674b8bbfcf- kube-system e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3 1010 0 2025-09-05 00:20:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-f6n82 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1b8478697b7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6n82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6n82-" Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:23.585 [INFO][4728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6n82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:23.632 [INFO][4773] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" HandleID="k8s-pod-network.5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" Workload="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:23.632 [INFO][4773] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" HandleID="k8s-pod-network.5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" Workload="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002df630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-f6n82", "timestamp":"2025-09-05 00:21:23.632646247 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:23.633 [INFO][4773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.026 [INFO][4773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.026 [INFO][4773] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.049 [INFO][4773] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" host="localhost" Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.094 [INFO][4773] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.100 [INFO][4773] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.103 [INFO][4773] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.105 [INFO][4773] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.106 [INFO][4773] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" host="localhost" Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.110 [INFO][4773] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50 Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.122 [INFO][4773] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" host="localhost" Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.137 [INFO][4773] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" host="localhost" Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.137 [INFO][4773] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" host="localhost" Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.137 [INFO][4773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:21:24.203883 containerd[1469]: 2025-09-05 00:21:24.138 [INFO][4773] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" HandleID="k8s-pod-network.5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" Workload="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:24.204489 containerd[1469]: 2025-09-05 00:21:24.155 [INFO][4728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6n82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f6n82-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-f6n82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b8478697b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:24.204489 containerd[1469]: 2025-09-05 00:21:24.155 [INFO][4728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6n82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:24.204489 containerd[1469]: 2025-09-05 00:21:24.155 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b8478697b7 ContainerID="5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6n82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:24.204489 containerd[1469]: 2025-09-05 00:21:24.180 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6n82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:24.204489 
containerd[1469]: 2025-09-05 00:21:24.180 [INFO][4728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6n82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f6n82-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50", Pod:"coredns-674b8bbfcf-f6n82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b8478697b7", MAC:"7a:4e:df:33:68:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:24.204489 containerd[1469]: 2025-09-05 00:21:24.195 [INFO][4728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6n82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:24.206432 containerd[1469]: time="2025-09-05T00:21:24.206386804Z" level=info msg="CreateContainer within sandbox \"b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:21:24.237200 containerd[1469]: time="2025-09-05T00:21:24.236890024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:24.237200 containerd[1469]: time="2025-09-05T00:21:24.236948882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:24.237200 containerd[1469]: time="2025-09-05T00:21:24.236978867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:24.237200 containerd[1469]: time="2025-09-05T00:21:24.237092846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:24.260736 systemd[1]: Started cri-containerd-5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50.scope - libcontainer container 5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50. Sep 5 00:21:24.282619 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:21:24.333453 containerd[1469]: time="2025-09-05T00:21:24.333176421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f6n82,Uid:e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3,Namespace:kube-system,Attempt:1,} returns sandbox id \"5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50\"" Sep 5 00:21:24.334016 containerd[1469]: time="2025-09-05T00:21:24.333478987Z" level=info msg="StartContainer for \"697c4e50c91344901c03779311b053b88b132772a5b1677d995ba1832dd68488\" returns successfully" Sep 5 00:21:24.336351 containerd[1469]: time="2025-09-05T00:21:24.336315808Z" level=info msg="CreateContainer within sandbox \"b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01bd804d7f1e26e85c64987f05916bd88d78228b5288332747a08329b3f873ee\"" Sep 5 00:21:24.339627 kubelet[2562]: E0905 00:21:24.339366 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:24.342837 containerd[1469]: time="2025-09-05T00:21:24.342571088Z" level=info msg="StartContainer for \"01bd804d7f1e26e85c64987f05916bd88d78228b5288332747a08329b3f873ee\"" Sep 5 00:21:24.349998 containerd[1469]: time="2025-09-05T00:21:24.349935090Z" level=info msg="CreateContainer within sandbox \"5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:21:24.380667 containerd[1469]: time="2025-09-05T00:21:24.380517205Z" level=info msg="CreateContainer within sandbox \"5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e5a96fe9899aa0b18c93073e1fde7ca0fb0f7d70db496e6264863c6b5dd44d1\"" Sep 5 00:21:24.383114 containerd[1469]: time="2025-09-05T00:21:24.383071889Z" level=info msg="StartContainer for \"0e5a96fe9899aa0b18c93073e1fde7ca0fb0f7d70db496e6264863c6b5dd44d1\"" Sep 5 00:21:24.406058 systemd[1]: Started cri-containerd-01bd804d7f1e26e85c64987f05916bd88d78228b5288332747a08329b3f873ee.scope - libcontainer container 01bd804d7f1e26e85c64987f05916bd88d78228b5288332747a08329b3f873ee. Sep 5 00:21:24.416934 systemd[1]: Started cri-containerd-0e5a96fe9899aa0b18c93073e1fde7ca0fb0f7d70db496e6264863c6b5dd44d1.scope - libcontainer container 0e5a96fe9899aa0b18c93073e1fde7ca0fb0f7d70db496e6264863c6b5dd44d1. 
Sep 5 00:21:24.757979 containerd[1469]: time="2025-09-05T00:21:24.757920219Z" level=info msg="StartContainer for \"0e5a96fe9899aa0b18c93073e1fde7ca0fb0f7d70db496e6264863c6b5dd44d1\" returns successfully" Sep 5 00:21:24.758217 containerd[1469]: time="2025-09-05T00:21:24.757928203Z" level=info msg="StartContainer for \"01bd804d7f1e26e85c64987f05916bd88d78228b5288332747a08329b3f873ee\" returns successfully" Sep 5 00:21:24.962906 kubelet[2562]: E0905 00:21:24.962857 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:24.972359 kubelet[2562]: E0905 00:21:24.967556 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:24.972359 kubelet[2562]: E0905 00:21:24.967862 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:24.991185 kubelet[2562]: I0905 00:21:24.990592 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-575c74c6d4-v8wf5" podStartSLOduration=28.537241343 podStartE2EDuration="32.990572583s" podCreationTimestamp="2025-09-05 00:20:52 +0000 UTC" firstStartedPulling="2025-09-05 00:21:19.651711843 +0000 UTC m=+43.433404574" lastFinishedPulling="2025-09-05 00:21:24.105043083 +0000 UTC m=+47.886735814" observedRunningTime="2025-09-05 00:21:24.97391505 +0000 UTC m=+48.755607781" watchObservedRunningTime="2025-09-05 00:21:24.990572583 +0000 UTC m=+48.772265314" Sep 5 00:21:25.012757 kubelet[2562]: I0905 00:21:25.012592 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f6n82" podStartSLOduration=44.012575298 podStartE2EDuration="44.012575298s" podCreationTimestamp="2025-09-05 00:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:21:25.011408755 +0000 UTC m=+48.793101486" watchObservedRunningTime="2025-09-05 00:21:25.012575298 +0000 UTC m=+48.794268029" Sep 5 00:21:25.012757 kubelet[2562]: I0905 00:21:25.012735 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lntfn" podStartSLOduration=44.012729822 podStartE2EDuration="44.012729822s" podCreationTimestamp="2025-09-05 00:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:21:24.988730435 +0000 UTC m=+48.770423176" watchObservedRunningTime="2025-09-05 00:21:25.012729822 +0000 UTC m=+48.794422553" Sep 5 00:21:25.218827 kernel: bpftool[5163]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 00:21:25.270225 systemd-networkd[1387]: cali33bb3336e74: Gained IPv6LL Sep 5 00:21:25.335062 systemd-networkd[1387]: cali59ee6f6c436: Gained IPv6LL Sep 5 00:21:25.479707 systemd-networkd[1387]: vxlan.calico: Link UP Sep 5 00:21:25.479729 systemd-networkd[1387]: vxlan.calico: Gained carrier Sep 5 00:21:25.590698 systemd-networkd[1387]: cali1b8478697b7: Gained IPv6LL Sep 5 00:21:25.846088 systemd-networkd[1387]: cali1bfa36f353a: Gained IPv6LL Sep 5 00:21:25.969163 kubelet[2562]: E0905 00:21:25.969105 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:25.969807 kubelet[2562]: E0905 00:21:25.969167 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:26.689547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1069936744.mount: Deactivated successfully. Sep 5 00:21:26.709634 containerd[1469]: time="2025-09-05T00:21:26.709573847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:26.710550 containerd[1469]: time="2025-09-05T00:21:26.710481208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 5 00:21:26.712264 containerd[1469]: time="2025-09-05T00:21:26.712226720Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:26.714569 containerd[1469]: time="2025-09-05T00:21:26.714518668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:26.715231 containerd[1469]: time="2025-09-05T00:21:26.715190966Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.607683965s" Sep 5 00:21:26.715231 containerd[1469]: time="2025-09-05T00:21:26.715222264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 5 00:21:26.716249 containerd[1469]: time="2025-09-05T00:21:26.716214540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:21:26.720359 containerd[1469]: time="2025-09-05T00:21:26.720332269Z" level=info msg="CreateContainer within sandbox \"4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 00:21:26.734065 containerd[1469]: time="2025-09-05T00:21:26.734024871Z" level=info msg="CreateContainer within sandbox \"4a01cbedb9b991cbc9652ae2145d716b3c0fa680914943d942fd565809bf5600\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d79f72336d0255c76406b70cea20396d3733ecac5065b7b4dc8b0fabe900b6bf\"" Sep 5 00:21:26.734587 containerd[1469]: time="2025-09-05T00:21:26.734557301Z" level=info msg="StartContainer for \"d79f72336d0255c76406b70cea20396d3733ecac5065b7b4dc8b0fabe900b6bf\"" Sep 5 00:21:26.760943 systemd[1]: Started cri-containerd-d79f72336d0255c76406b70cea20396d3733ecac5065b7b4dc8b0fabe900b6bf.scope - libcontainer container d79f72336d0255c76406b70cea20396d3733ecac5065b7b4dc8b0fabe900b6bf. 
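The \x2d runs in unit names such as var-lib-containerd-tmpmounts-containerd\x2dmount1069936744.mount are not corruption: when systemd derives a mount-unit name from a path it turns '/' into '-' and escapes a literal '-' inside a path component as \x2d. A small sketch of that escaping for plain ASCII paths (systemd-escape(1) is the authoritative implementation):

```go
// Minimal systemd unit-name escaping for ASCII paths: '/' becomes '-',
// '-' and other non [a-zA-Z0-9_.] bytes become \xXX, leading '.' is escaped.
package main

import (
	"fmt"
	"strings"
)

func systemdEscape(path string) string {
	path = strings.Trim(path, "/")
	var b strings.Builder
	for i, c := range []byte(path) {
		switch {
		case c == '/':
			b.WriteByte('-')
		case c == '-', c == '.' && i == 0:
			fmt.Fprintf(&b, `\x%02x`, c)
		case 'a' <= c && c <= 'z', 'A' <= c && c <= 'Z',
			'0' <= c && c <= '9', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the mount unit seen in the log above.
	fmt.Println(systemdEscape("/var/lib/containerd/tmpmounts/containerd-mount1069936744") + ".mount")
}
```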
Sep 5 00:21:26.805823 containerd[1469]: time="2025-09-05T00:21:26.805771327Z" level=info msg="StartContainer for \"d79f72336d0255c76406b70cea20396d3733ecac5065b7b4dc8b0fabe900b6bf\" returns successfully" Sep 5 00:21:26.933957 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Sep 5 00:21:26.972327 kubelet[2562]: E0905 00:21:26.972014 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:21:26.982901 kubelet[2562]: I0905 00:21:26.982799 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-755bcd44d6-97wb8" podStartSLOduration=2.193826339 podStartE2EDuration="9.982766472s" podCreationTimestamp="2025-09-05 00:21:17 +0000 UTC" firstStartedPulling="2025-09-05 00:21:18.927135512 +0000 UTC m=+42.708828243" lastFinishedPulling="2025-09-05 00:21:26.716075645 +0000 UTC m=+50.497768376" observedRunningTime="2025-09-05 00:21:26.980663882 +0000 UTC m=+50.762356613" watchObservedRunningTime="2025-09-05 00:21:26.982766472 +0000 UTC m=+50.764459203" Sep 5 00:21:27.069542 containerd[1469]: time="2025-09-05T00:21:27.069486147Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:27.071074 containerd[1469]: time="2025-09-05T00:21:27.071025555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 00:21:27.072999 containerd[1469]: time="2025-09-05T00:21:27.072965182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 356.718502ms" Sep 5 00:21:27.072999 containerd[1469]: time="2025-09-05T00:21:27.072991310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 00:21:27.073920 containerd[1469]: time="2025-09-05T00:21:27.073843211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 00:21:27.078835 containerd[1469]: time="2025-09-05T00:21:27.078802425Z" level=info msg="CreateContainer within sandbox \"a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:21:27.096860 containerd[1469]: time="2025-09-05T00:21:27.096824536Z" level=info msg="CreateContainer within sandbox \"a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e0d2e243719bfa616423297a5d85e8a522ed6a5ac4ef7baed36293df4b080f89\"" Sep 5 00:21:27.097272 containerd[1469]: time="2025-09-05T00:21:27.097248128Z" level=info msg="StartContainer for \"e0d2e243719bfa616423297a5d85e8a522ed6a5ac4ef7baed36293df4b080f89\"" Sep 5 00:21:27.137960 systemd[1]: Started cri-containerd-e0d2e243719bfa616423297a5d85e8a522ed6a5ac4ef7baed36293df4b080f89.scope - libcontainer container e0d2e243719bfa616423297a5d85e8a522ed6a5ac4ef7baed36293df4b080f89. 
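The pod_startup_latency_tracker entries decompose cleanly: for whisker-755bcd44d6-97wb8, podStartE2EDuration (9.982766472s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (2.193826339s) is exactly that minus the image-pull window (lastFinishedPulling − firstStartedPulling ≈ 7.789s). A short check of the arithmetic, using timestamps copied from the entry above:

```go
// Recompute the whisker-755bcd44d6-97wb8 startup durations from the
// timestamps logged by kubelet's pod_startup_latency_tracker.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-09-05 00:21:17 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-09-05 00:21:26.982766472 +0000 UTC")
	pullStart, _ := time.Parse(layout, "2025-09-05 00:21:18.927135512 +0000 UTC")
	pullEnd, _ := time.Parse(layout, "2025-09-05 00:21:26.716075645 +0000 UTC")

	e2e := observed.Sub(created)     // 9.982766472s, the logged podStartE2EDuration
	pull := pullEnd.Sub(pullStart)   // ~7.789s spent pulling the image
	fmt.Println("e2e:", e2e, "pull:", pull, "slo:", e2e-pull) // slo = 2.193826339s
}
```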
Sep 5 00:21:27.183458 containerd[1469]: time="2025-09-05T00:21:27.183399756Z" level=info msg="StartContainer for \"e0d2e243719bfa616423297a5d85e8a522ed6a5ac4ef7baed36293df4b080f89\" returns successfully" Sep 5 00:21:27.613306 systemd[1]: Started sshd@11-10.0.0.128:22-10.0.0.1:47384.service - OpenSSH per-connection server daemon (10.0.0.1:47384). Sep 5 00:21:27.671913 sshd[5341]: Accepted publickey for core from 10.0.0.1 port 47384 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:21:27.674498 sshd[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:21:27.680212 systemd-logind[1451]: New session 12 of user core. Sep 5 00:21:27.685958 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 00:21:27.829354 sshd[5341]: pam_unix(sshd:session): session closed for user core Sep 5 00:21:27.834299 systemd[1]: sshd@11-10.0.0.128:22-10.0.0.1:47384.service: Deactivated successfully. Sep 5 00:21:27.836567 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:21:27.839996 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. Sep 5 00:21:27.840987 systemd-logind[1451]: Removed session 12. Sep 5 00:21:28.942904 containerd[1469]: time="2025-09-05T00:21:28.942858117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:28.943636 containerd[1469]: time="2025-09-05T00:21:28.943586272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 5 00:21:28.944705 containerd[1469]: time="2025-09-05T00:21:28.944670926Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:28.946791 containerd[1469]: time="2025-09-05T00:21:28.946741350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:28.947362 containerd[1469]: time="2025-09-05T00:21:28.947331701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.873457752s" Sep 5 00:21:28.947362 containerd[1469]: time="2025-09-05T00:21:28.947360173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 5 00:21:28.948268 containerd[1469]: time="2025-09-05T00:21:28.948245188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 00:21:28.953618 containerd[1469]: time="2025-09-05T00:21:28.953586133Z" level=info msg="CreateContainer within sandbox \"93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 00:21:28.972147 containerd[1469]: time="2025-09-05T00:21:28.972094011Z" level=info msg="CreateContainer within sandbox \"93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"0c0f46e69b15de43f2a8e32baad195b443b9ed2b47d0dc67e7ba223e3fd947ba\"" Sep 5 00:21:28.972685 containerd[1469]: time="2025-09-05T00:21:28.972615394Z" level=info msg="StartContainer for \"0c0f46e69b15de43f2a8e32baad195b443b9ed2b47d0dc67e7ba223e3fd947ba\"" Sep 5 00:21:28.979654 kubelet[2562]: I0905 00:21:28.979616 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:21:29.014932 systemd[1]: Started cri-containerd-0c0f46e69b15de43f2a8e32baad195b443b9ed2b47d0dc67e7ba223e3fd947ba.scope - libcontainer container 0c0f46e69b15de43f2a8e32baad195b443b9ed2b47d0dc67e7ba223e3fd947ba. Sep 5 00:21:29.044426 containerd[1469]: time="2025-09-05T00:21:29.044387683Z" level=info msg="StartContainer for \"0c0f46e69b15de43f2a8e32baad195b443b9ed2b47d0dc67e7ba223e3fd947ba\" returns successfully" Sep 5 00:21:32.841114 systemd[1]: Started sshd@12-10.0.0.128:22-10.0.0.1:43928.service - OpenSSH per-connection server daemon (10.0.0.1:43928). Sep 5 00:21:33.128476 containerd[1469]: time="2025-09-05T00:21:33.128415842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:33.129908 containerd[1469]: time="2025-09-05T00:21:33.129863127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 5 00:21:33.131605 containerd[1469]: time="2025-09-05T00:21:33.131526273Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:33.134712 containerd[1469]: time="2025-09-05T00:21:33.134640762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:33.135616 containerd[1469]: time="2025-09-05T00:21:33.135581223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.187310829s" Sep 5 00:21:33.135680 containerd[1469]: time="2025-09-05T00:21:33.135618462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 5 00:21:33.137698 containerd[1469]: time="2025-09-05T00:21:33.137524541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 00:21:33.160962 containerd[1469]: time="2025-09-05T00:21:33.160920613Z" level=info msg="CreateContainer within sandbox \"7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 00:21:33.172877 containerd[1469]: time="2025-09-05T00:21:33.172829887Z" level=info msg="CreateContainer within sandbox \"7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b5d322e806965a5871f801a321e771cea44e2cacc077e24ff24f5bf71727e2b3\"" Sep 5 00:21:33.173550 containerd[1469]: time="2025-09-05T00:21:33.173524039Z" level=info msg="StartContainer for 
\"b5d322e806965a5871f801a321e771cea44e2cacc077e24ff24f5bf71727e2b3\"" Sep 5 00:21:33.181719 sshd[5408]: Accepted publickey for core from 10.0.0.1 port 43928 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:21:33.183836 sshd[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:21:33.194284 systemd-logind[1451]: New session 13 of user core. Sep 5 00:21:33.204931 systemd[1]: Started cri-containerd-b5d322e806965a5871f801a321e771cea44e2cacc077e24ff24f5bf71727e2b3.scope - libcontainer container b5d322e806965a5871f801a321e771cea44e2cacc077e24ff24f5bf71727e2b3. Sep 5 00:21:33.206169 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 00:21:33.254307 containerd[1469]: time="2025-09-05T00:21:33.254249594Z" level=info msg="StartContainer for \"b5d322e806965a5871f801a321e771cea44e2cacc077e24ff24f5bf71727e2b3\" returns successfully" Sep 5 00:21:33.399676 sshd[5408]: pam_unix(sshd:session): session closed for user core Sep 5 00:21:33.410214 systemd[1]: sshd@12-10.0.0.128:22-10.0.0.1:43928.service: Deactivated successfully. Sep 5 00:21:33.412047 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:21:33.412761 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. Sep 5 00:21:33.420091 systemd[1]: Started sshd@13-10.0.0.128:22-10.0.0.1:43934.service - OpenSSH per-connection server daemon (10.0.0.1:43934). Sep 5 00:21:33.420714 systemd-logind[1451]: Removed session 13. Sep 5 00:21:33.449224 sshd[5472]: Accepted publickey for core from 10.0.0.1 port 43934 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:21:33.451039 sshd[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:21:33.455317 systemd-logind[1451]: New session 14 of user core. Sep 5 00:21:33.462956 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:21:33.636948 sshd[5472]: pam_unix(sshd:session): session closed for user core Sep 5 00:21:33.648869 systemd[1]: sshd@13-10.0.0.128:22-10.0.0.1:43934.service: Deactivated successfully. Sep 5 00:21:33.652501 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 00:21:33.653804 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:21:33.664206 systemd[1]: Started sshd@14-10.0.0.128:22-10.0.0.1:43946.service - OpenSSH per-connection server daemon (10.0.0.1:43946). Sep 5 00:21:33.665293 systemd-logind[1451]: Removed session 14. Sep 5 00:21:33.704407 sshd[5484]: Accepted publickey for core from 10.0.0.1 port 43946 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:21:33.706305 sshd[5484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:21:33.711047 systemd-logind[1451]: New session 15 of user core. Sep 5 00:21:33.717951 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:21:33.843598 sshd[5484]: pam_unix(sshd:session): session closed for user core Sep 5 00:21:33.848884 systemd[1]: sshd@14-10.0.0.128:22-10.0.0.1:43946.service: Deactivated successfully. Sep 5 00:21:33.851640 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:21:33.852443 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. Sep 5 00:21:33.853431 systemd-logind[1451]: Removed session 15. 
Sep 5 00:21:34.009430 kubelet[2562]: I0905 00:21:34.009371 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-575c74c6d4-llg82" podStartSLOduration=36.957891958 podStartE2EDuration="42.009355591s" podCreationTimestamp="2025-09-05 00:20:52 +0000 UTC" firstStartedPulling="2025-09-05 00:21:22.022186722 +0000 UTC m=+45.803879453" lastFinishedPulling="2025-09-05 00:21:27.073650355 +0000 UTC m=+50.855343086" observedRunningTime="2025-09-05 00:21:28.003156385 +0000 UTC m=+51.784849116" watchObservedRunningTime="2025-09-05 00:21:34.009355591 +0000 UTC m=+57.791048322" Sep 5 00:21:34.009974 kubelet[2562]: I0905 00:21:34.009464 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7db846b456-d8kdg" podStartSLOduration=30.788663898 podStartE2EDuration="40.009459415s" podCreationTimestamp="2025-09-05 00:20:54 +0000 UTC" firstStartedPulling="2025-09-05 00:21:23.91586223 +0000 UTC m=+47.697554962" lastFinishedPulling="2025-09-05 00:21:33.136657748 +0000 UTC m=+56.918350479" observedRunningTime="2025-09-05 00:21:34.008884823 +0000 UTC m=+57.790577554" watchObservedRunningTime="2025-09-05 00:21:34.009459415 +0000 UTC m=+57.791152146" Sep 5 00:21:35.712241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467816927.mount: Deactivated successfully. Sep 5 00:21:36.300434 containerd[1469]: time="2025-09-05T00:21:36.300261219Z" level=info msg="StopPodSandbox for \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\"" Sep 5 00:21:36.446801 containerd[1469]: 2025-09-05 00:21:36.408 [WARNING][5542] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0", GenerateName:"calico-apiserver-575c74c6d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a619295-d161-4fe3-866f-7232abcadd6e", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575c74c6d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0", Pod:"calico-apiserver-575c74c6d4-llg82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali654ae93e9b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:36.446801 containerd[1469]: 2025-09-05 00:21:36.409 [INFO][5542] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:36.446801 containerd[1469]: 2025-09-05 00:21:36.409 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" iface="eth0" netns="" Sep 5 00:21:36.446801 containerd[1469]: 2025-09-05 00:21:36.409 [INFO][5542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:36.446801 containerd[1469]: 2025-09-05 00:21:36.409 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:36.446801 containerd[1469]: 2025-09-05 00:21:36.432 [INFO][5550] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" HandleID="k8s-pod-network.86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Workload="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:36.446801 containerd[1469]: 2025-09-05 00:21:36.433 [INFO][5550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:36.446801 containerd[1469]: 2025-09-05 00:21:36.433 [INFO][5550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:36.446801 containerd[1469]: 2025-09-05 00:21:36.438 [WARNING][5550] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" HandleID="k8s-pod-network.86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Workload="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:36.446801 containerd[1469]: 2025-09-05 00:21:36.438 [INFO][5550] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" HandleID="k8s-pod-network.86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Workload="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:36.446801 containerd[1469]: 2025-09-05 00:21:36.440 [INFO][5550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:36.446801 containerd[1469]: 2025-09-05 00:21:36.443 [INFO][5542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:36.456062 containerd[1469]: time="2025-09-05T00:21:36.456011115Z" level=info msg="TearDown network for sandbox \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\" successfully" Sep 5 00:21:36.456062 containerd[1469]: time="2025-09-05T00:21:36.456052383Z" level=info msg="StopPodSandbox for \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\" returns successfully" Sep 5 00:21:36.497794 containerd[1469]: time="2025-09-05T00:21:36.497717464Z" level=info msg="RemovePodSandbox for \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\"" Sep 5 00:21:36.499922 containerd[1469]: time="2025-09-05T00:21:36.499885387Z" level=info msg="Forcibly stopping sandbox \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\"" Sep 5 00:21:36.617858 containerd[1469]: 2025-09-05 00:21:36.568 [WARNING][5568] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0", GenerateName:"calico-apiserver-575c74c6d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a619295-d161-4fe3-866f-7232abcadd6e", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575c74c6d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4c77a76a93ef40c30da93327982f740b25250ccf172f35f4bd952b11c88b4f0", Pod:"calico-apiserver-575c74c6d4-llg82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali654ae93e9b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:36.617858 containerd[1469]: 2025-09-05 00:21:36.568 [INFO][5568] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:36.617858 containerd[1469]: 2025-09-05 00:21:36.568 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" iface="eth0" netns="" Sep 5 00:21:36.617858 containerd[1469]: 2025-09-05 00:21:36.568 [INFO][5568] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:36.617858 containerd[1469]: 2025-09-05 00:21:36.568 [INFO][5568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:36.617858 containerd[1469]: 2025-09-05 00:21:36.600 [INFO][5576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" HandleID="k8s-pod-network.86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Workload="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:36.617858 containerd[1469]: 2025-09-05 00:21:36.600 [INFO][5576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:36.617858 containerd[1469]: 2025-09-05 00:21:36.600 [INFO][5576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:36.617858 containerd[1469]: 2025-09-05 00:21:36.609 [WARNING][5576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" HandleID="k8s-pod-network.86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Workload="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:36.617858 containerd[1469]: 2025-09-05 00:21:36.609 [INFO][5576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" HandleID="k8s-pod-network.86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Workload="localhost-k8s-calico--apiserver--575c74c6d4--llg82-eth0" Sep 5 00:21:36.617858 containerd[1469]: 2025-09-05 00:21:36.611 [INFO][5576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:36.617858 containerd[1469]: 2025-09-05 00:21:36.614 [INFO][5568] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534" Sep 5 00:21:36.638474 containerd[1469]: time="2025-09-05T00:21:36.617850257Z" level=info msg="TearDown network for sandbox \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\" successfully" Sep 5 00:21:36.746187 containerd[1469]: time="2025-09-05T00:21:36.746128850Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:21:36.746345 containerd[1469]: time="2025-09-05T00:21:36.746226342Z" level=info msg="RemovePodSandbox \"86670715bdb2585d13885bd5e4c3f77fb1d7d94e1640d9e3df6e9e4f03c23534\" returns successfully" Sep 5 00:21:36.753649 containerd[1469]: time="2025-09-05T00:21:36.753278739Z" level=info msg="StopPodSandbox for \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\"" Sep 5 00:21:36.858579 containerd[1469]: 2025-09-05 00:21:36.803 [WARNING][5598] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--k7qc4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ecf2b887-548a-428c-a0ae-a36f75934ba9", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a", Pod:"goldmane-54d579b49d-k7qc4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali59ee6f6c436", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:36.858579 containerd[1469]: 2025-09-05 00:21:36.805 [INFO][5598] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:36.858579 containerd[1469]: 2025-09-05 00:21:36.805 [INFO][5598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" iface="eth0" netns="" Sep 5 00:21:36.858579 containerd[1469]: 2025-09-05 00:21:36.805 [INFO][5598] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:36.858579 containerd[1469]: 2025-09-05 00:21:36.805 [INFO][5598] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:36.858579 containerd[1469]: 2025-09-05 00:21:36.839 [INFO][5606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" HandleID="k8s-pod-network.13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Workload="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:36.858579 containerd[1469]: 2025-09-05 00:21:36.839 [INFO][5606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:36.858579 containerd[1469]: 2025-09-05 00:21:36.839 [INFO][5606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:36.858579 containerd[1469]: 2025-09-05 00:21:36.848 [WARNING][5606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" HandleID="k8s-pod-network.13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Workload="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:36.858579 containerd[1469]: 2025-09-05 00:21:36.848 [INFO][5606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" HandleID="k8s-pod-network.13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Workload="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:36.858579 containerd[1469]: 2025-09-05 00:21:36.851 [INFO][5606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:36.858579 containerd[1469]: 2025-09-05 00:21:36.855 [INFO][5598] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:36.859235 containerd[1469]: time="2025-09-05T00:21:36.858613143Z" level=info msg="TearDown network for sandbox \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\" successfully" Sep 5 00:21:36.859235 containerd[1469]: time="2025-09-05T00:21:36.858655261Z" level=info msg="StopPodSandbox for \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\" returns successfully" Sep 5 00:21:36.859235 containerd[1469]: time="2025-09-05T00:21:36.859196423Z" level=info msg="RemovePodSandbox for \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\"" Sep 5 00:21:36.859235 containerd[1469]: time="2025-09-05T00:21:36.859225928Z" level=info msg="Forcibly stopping sandbox \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\"" Sep 5 00:21:36.966681 containerd[1469]: 2025-09-05 00:21:36.904 [WARNING][5623] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--k7qc4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ecf2b887-548a-428c-a0ae-a36f75934ba9", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a", Pod:"goldmane-54d579b49d-k7qc4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali59ee6f6c436", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:36.966681 containerd[1469]: 2025-09-05 00:21:36.904 [INFO][5623] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:36.966681 containerd[1469]: 2025-09-05 00:21:36.904 [INFO][5623] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" iface="eth0" netns="" Sep 5 00:21:36.966681 containerd[1469]: 2025-09-05 00:21:36.904 [INFO][5623] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:36.966681 containerd[1469]: 2025-09-05 00:21:36.904 [INFO][5623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:36.966681 containerd[1469]: 2025-09-05 00:21:36.949 [INFO][5632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" HandleID="k8s-pod-network.13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Workload="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:36.966681 containerd[1469]: 2025-09-05 00:21:36.949 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:36.966681 containerd[1469]: 2025-09-05 00:21:36.949 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:36.966681 containerd[1469]: 2025-09-05 00:21:36.957 [WARNING][5632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" HandleID="k8s-pod-network.13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Workload="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:36.966681 containerd[1469]: 2025-09-05 00:21:36.957 [INFO][5632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" HandleID="k8s-pod-network.13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Workload="localhost-k8s-goldmane--54d579b49d--k7qc4-eth0" Sep 5 00:21:36.966681 containerd[1469]: 2025-09-05 00:21:36.959 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:36.966681 containerd[1469]: 2025-09-05 00:21:36.962 [INFO][5623] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d" Sep 5 00:21:36.966681 containerd[1469]: time="2025-09-05T00:21:36.966620936Z" level=info msg="TearDown network for sandbox \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\" successfully" Sep 5 00:21:36.974920 containerd[1469]: time="2025-09-05T00:21:36.974856524Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:21:36.975091 containerd[1469]: time="2025-09-05T00:21:36.974950970Z" level=info msg="RemovePodSandbox \"13acee911c19952b9970ca288be24dec3d2d3c82a7287f455b9c9793bffc701d\" returns successfully" Sep 5 00:21:36.975491 containerd[1469]: time="2025-09-05T00:21:36.975429825Z" level=info msg="StopPodSandbox for \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\"" Sep 5 00:21:37.070059 containerd[1469]: 2025-09-05 00:21:37.023 [WARNING][5652] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f6n82-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50", Pod:"coredns-674b8bbfcf-f6n82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b8478697b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:37.070059 containerd[1469]: 2025-09-05 00:21:37.026 [INFO][5652] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:37.070059 containerd[1469]: 2025-09-05 00:21:37.026 [INFO][5652] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" iface="eth0" netns="" Sep 5 00:21:37.070059 containerd[1469]: 2025-09-05 00:21:37.026 [INFO][5652] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:37.070059 containerd[1469]: 2025-09-05 00:21:37.026 [INFO][5652] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:37.070059 containerd[1469]: 2025-09-05 00:21:37.052 [INFO][5661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" HandleID="k8s-pod-network.6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Workload="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:37.070059 containerd[1469]: 2025-09-05 00:21:37.052 [INFO][5661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:37.070059 containerd[1469]: 2025-09-05 00:21:37.052 [INFO][5661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:21:37.070059 containerd[1469]: 2025-09-05 00:21:37.061 [WARNING][5661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" HandleID="k8s-pod-network.6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Workload="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:37.070059 containerd[1469]: 2025-09-05 00:21:37.061 [INFO][5661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" HandleID="k8s-pod-network.6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Workload="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:37.070059 containerd[1469]: 2025-09-05 00:21:37.062 [INFO][5661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:37.070059 containerd[1469]: 2025-09-05 00:21:37.067 [INFO][5652] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:37.070915 containerd[1469]: time="2025-09-05T00:21:37.070096030Z" level=info msg="TearDown network for sandbox \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\" successfully" Sep 5 00:21:37.070915 containerd[1469]: time="2025-09-05T00:21:37.070124012Z" level=info msg="StopPodSandbox for \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\" returns successfully" Sep 5 00:21:37.070915 containerd[1469]: time="2025-09-05T00:21:37.070688459Z" level=info msg="RemovePodSandbox for \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\"" Sep 5 00:21:37.070915 containerd[1469]: time="2025-09-05T00:21:37.070711612Z" level=info msg="Forcibly stopping sandbox \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\"" Sep 5 00:21:37.145320 containerd[1469]: 2025-09-05 00:21:37.107 [WARNING][5678] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f6n82-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e8d3e7fa-d9cb-474c-a80c-88c1ff6063e3", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5500686d4842ab18953c3b74562a2568c78acef9ef5ff1a5601d79f750441a50", Pod:"coredns-674b8bbfcf-f6n82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b8478697b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:37.145320 containerd[1469]: 2025-09-05 00:21:37.107 [INFO][5678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:37.145320 containerd[1469]: 2025-09-05 00:21:37.107 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" iface="eth0" netns="" Sep 5 00:21:37.145320 containerd[1469]: 2025-09-05 00:21:37.107 [INFO][5678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:37.145320 containerd[1469]: 2025-09-05 00:21:37.107 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:37.145320 containerd[1469]: 2025-09-05 00:21:37.131 [INFO][5686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" HandleID="k8s-pod-network.6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Workload="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:37.145320 containerd[1469]: 2025-09-05 00:21:37.131 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:37.145320 containerd[1469]: 2025-09-05 00:21:37.132 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:21:37.145320 containerd[1469]: 2025-09-05 00:21:37.138 [WARNING][5686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" HandleID="k8s-pod-network.6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Workload="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:37.145320 containerd[1469]: 2025-09-05 00:21:37.138 [INFO][5686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" HandleID="k8s-pod-network.6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Workload="localhost-k8s-coredns--674b8bbfcf--f6n82-eth0" Sep 5 00:21:37.145320 containerd[1469]: 2025-09-05 00:21:37.139 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:37.145320 containerd[1469]: 2025-09-05 00:21:37.142 [INFO][5678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d" Sep 5 00:21:37.146055 containerd[1469]: time="2025-09-05T00:21:37.145356719Z" level=info msg="TearDown network for sandbox \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\" successfully" Sep 5 00:21:37.150088 containerd[1469]: time="2025-09-05T00:21:37.150061527Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:21:37.150140 containerd[1469]: time="2025-09-05T00:21:37.150117141Z" level=info msg="RemovePodSandbox \"6075a57befb89deac0731ed0688763860dbfe4286b1abb794d4eebc146d4546d\" returns successfully" Sep 5 00:21:37.150947 containerd[1469]: time="2025-09-05T00:21:37.150677891Z" level=info msg="StopPodSandbox for \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\"" Sep 5 00:21:37.227852 containerd[1469]: 2025-09-05 00:21:37.186 [WARNING][5704] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ps2kf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c21203f-d0d0-4ce5-8193-33854aaca356", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d", Pod:"csi-node-driver-ps2kf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e3a140bd34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:37.227852 containerd[1469]: 2025-09-05 00:21:37.186 [INFO][5704] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:37.227852 containerd[1469]: 2025-09-05 00:21:37.186 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" iface="eth0" netns="" Sep 5 00:21:37.227852 containerd[1469]: 2025-09-05 00:21:37.187 [INFO][5704] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:37.227852 containerd[1469]: 2025-09-05 00:21:37.187 [INFO][5704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:37.227852 containerd[1469]: 2025-09-05 00:21:37.213 [INFO][5713] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" HandleID="k8s-pod-network.53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Workload="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:37.227852 containerd[1469]: 2025-09-05 00:21:37.214 [INFO][5713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:37.227852 containerd[1469]: 2025-09-05 00:21:37.214 [INFO][5713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:37.227852 containerd[1469]: 2025-09-05 00:21:37.220 [WARNING][5713] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" HandleID="k8s-pod-network.53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Workload="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:37.227852 containerd[1469]: 2025-09-05 00:21:37.220 [INFO][5713] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" HandleID="k8s-pod-network.53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Workload="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:37.227852 containerd[1469]: 2025-09-05 00:21:37.221 [INFO][5713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:37.227852 containerd[1469]: 2025-09-05 00:21:37.224 [INFO][5704] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:37.229876 containerd[1469]: time="2025-09-05T00:21:37.227788773Z" level=info msg="TearDown network for sandbox \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\" successfully" Sep 5 00:21:37.229876 containerd[1469]: time="2025-09-05T00:21:37.229874558Z" level=info msg="StopPodSandbox for \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\" returns successfully" Sep 5 00:21:37.230318 containerd[1469]: time="2025-09-05T00:21:37.230296047Z" level=info msg="RemovePodSandbox for \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\"" Sep 5 00:21:37.230367 containerd[1469]: time="2025-09-05T00:21:37.230324921Z" level=info msg="Forcibly stopping sandbox \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\"" Sep 5 00:21:37.279863 containerd[1469]: time="2025-09-05T00:21:37.279819142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:37.281914 containerd[1469]: time="2025-09-05T00:21:37.281657383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 5 00:21:37.283233 containerd[1469]: time="2025-09-05T00:21:37.283187518Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:37.285813 containerd[1469]: time="2025-09-05T00:21:37.285768540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:37.287275 containerd[1469]: time="2025-09-05T00:21:37.286768582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.149213102s" Sep 5 00:21:37.287275 containerd[1469]: time="2025-09-05T00:21:37.286817584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 5 00:21:37.291110 containerd[1469]: time="2025-09-05T00:21:37.291088659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 
00:21:37.294590 containerd[1469]: time="2025-09-05T00:21:37.294557704Z" level=info msg="CreateContainer within sandbox \"8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 00:21:37.311579 containerd[1469]: time="2025-09-05T00:21:37.311521272Z" level=info msg="CreateContainer within sandbox \"8a9872bf061a0fa7becccab90abbe923cf4538b59f0a585b863ff3cf22935d1a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2eb4eb02832898ec8f6786b986f0ead400402a0ce060c9961da76b3f7b09cd97\"" Sep 5 00:21:37.312959 containerd[1469]: time="2025-09-05T00:21:37.312933035Z" level=info msg="StartContainer for \"2eb4eb02832898ec8f6786b986f0ead400402a0ce060c9961da76b3f7b09cd97\"" Sep 5 00:21:37.320429 containerd[1469]: 2025-09-05 00:21:37.272 [WARNING][5730] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ps2kf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c21203f-d0d0-4ce5-8193-33854aaca356", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d", Pod:"csi-node-driver-ps2kf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e3a140bd34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:37.320429 containerd[1469]: 2025-09-05 00:21:37.272 [INFO][5730] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:37.320429 containerd[1469]: 2025-09-05 00:21:37.272 [INFO][5730] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" iface="eth0" netns="" Sep 5 00:21:37.320429 containerd[1469]: 2025-09-05 00:21:37.272 [INFO][5730] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:37.320429 containerd[1469]: 2025-09-05 00:21:37.272 [INFO][5730] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:37.320429 containerd[1469]: 2025-09-05 00:21:37.304 [INFO][5744] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" HandleID="k8s-pod-network.53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Workload="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:37.320429 containerd[1469]: 2025-09-05 00:21:37.304 [INFO][5744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:37.320429 containerd[1469]: 2025-09-05 00:21:37.304 [INFO][5744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:37.320429 containerd[1469]: 2025-09-05 00:21:37.310 [WARNING][5744] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" HandleID="k8s-pod-network.53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Workload="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:37.320429 containerd[1469]: 2025-09-05 00:21:37.310 [INFO][5744] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" HandleID="k8s-pod-network.53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Workload="localhost-k8s-csi--node--driver--ps2kf-eth0" Sep 5 00:21:37.320429 containerd[1469]: 2025-09-05 00:21:37.314 [INFO][5744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:37.320429 containerd[1469]: 2025-09-05 00:21:37.317 [INFO][5730] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da" Sep 5 00:21:37.320866 containerd[1469]: time="2025-09-05T00:21:37.320462591Z" level=info msg="TearDown network for sandbox \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\" successfully" Sep 5 00:21:37.355428 containerd[1469]: time="2025-09-05T00:21:37.355099597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:21:37.355428 containerd[1469]: time="2025-09-05T00:21:37.355266911Z" level=info msg="RemovePodSandbox \"53ab105f47c6c9bee0aed53b9d41d495c2dcb936c01981769b4e836d8ebc21da\" returns successfully" Sep 5 00:21:37.356744 containerd[1469]: time="2025-09-05T00:21:37.355986107Z" level=info msg="StopPodSandbox for \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\"" Sep 5 00:21:37.396971 systemd[1]: Started cri-containerd-2eb4eb02832898ec8f6786b986f0ead400402a0ce060c9961da76b3f7b09cd97.scope - libcontainer container 2eb4eb02832898ec8f6786b986f0ead400402a0ce060c9961da76b3f7b09cd97. 
Sep 5 00:21:37.444058 containerd[1469]: 2025-09-05 00:21:37.400 [WARNING][5763] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0", GenerateName:"calico-kube-controllers-7db846b456-", Namespace:"calico-system", SelfLink:"", UID:"e434d84b-5a4e-4e9f-be35-307612a171bb", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db846b456", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3", Pod:"calico-kube-controllers-7db846b456-d8kdg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33bb3336e74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:37.444058 containerd[1469]: 2025-09-05 00:21:37.400 [INFO][5763] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:37.444058 containerd[1469]: 2025-09-05 00:21:37.400 [INFO][5763] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" iface="eth0" netns="" Sep 5 00:21:37.444058 containerd[1469]: 2025-09-05 00:21:37.400 [INFO][5763] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:37.444058 containerd[1469]: 2025-09-05 00:21:37.400 [INFO][5763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:37.444058 containerd[1469]: 2025-09-05 00:21:37.424 [INFO][5789] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" HandleID="k8s-pod-network.79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Workload="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:37.444058 containerd[1469]: 2025-09-05 00:21:37.424 [INFO][5789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:37.444058 containerd[1469]: 2025-09-05 00:21:37.425 [INFO][5789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:37.444058 containerd[1469]: 2025-09-05 00:21:37.431 [WARNING][5789] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" HandleID="k8s-pod-network.79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Workload="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:37.444058 containerd[1469]: 2025-09-05 00:21:37.431 [INFO][5789] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" HandleID="k8s-pod-network.79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Workload="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:37.444058 containerd[1469]: 2025-09-05 00:21:37.433 [INFO][5789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:37.444058 containerd[1469]: 2025-09-05 00:21:37.438 [INFO][5763] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:37.444058 containerd[1469]: time="2025-09-05T00:21:37.443933667Z" level=info msg="TearDown network for sandbox \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\" successfully" Sep 5 00:21:37.444058 containerd[1469]: time="2025-09-05T00:21:37.443958143Z" level=info msg="StopPodSandbox for \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\" returns successfully" Sep 5 00:21:37.446766 containerd[1469]: time="2025-09-05T00:21:37.446191363Z" level=info msg="RemovePodSandbox for \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\"" Sep 5 00:21:37.446766 containerd[1469]: time="2025-09-05T00:21:37.446232501Z" level=info msg="Forcibly stopping sandbox \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\"" Sep 5 00:21:37.460913 containerd[1469]: time="2025-09-05T00:21:37.460887187Z" level=info msg="StartContainer for \"2eb4eb02832898ec8f6786b986f0ead400402a0ce060c9961da76b3f7b09cd97\" returns successfully" Sep 5 00:21:37.537018 containerd[1469]: 2025-09-05 00:21:37.493 [WARNING][5823] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0", GenerateName:"calico-kube-controllers-7db846b456-", Namespace:"calico-system", SelfLink:"", UID:"e434d84b-5a4e-4e9f-be35-307612a171bb", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db846b456", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7063ada488df2fe47de55d11eecf7334e5b982f14ee93e15ec689ef9274362e3", Pod:"calico-kube-controllers-7db846b456-d8kdg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33bb3336e74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:37.537018 containerd[1469]: 2025-09-05 00:21:37.494 [INFO][5823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:37.537018 containerd[1469]: 2025-09-05 00:21:37.494 [INFO][5823] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" iface="eth0" netns="" Sep 5 00:21:37.537018 containerd[1469]: 2025-09-05 00:21:37.494 [INFO][5823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:37.537018 containerd[1469]: 2025-09-05 00:21:37.494 [INFO][5823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:37.537018 containerd[1469]: 2025-09-05 00:21:37.521 [INFO][5835] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" HandleID="k8s-pod-network.79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Workload="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:37.537018 containerd[1469]: 2025-09-05 00:21:37.521 [INFO][5835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:37.537018 containerd[1469]: 2025-09-05 00:21:37.521 [INFO][5835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:21:37.537018 containerd[1469]: 2025-09-05 00:21:37.529 [WARNING][5835] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" HandleID="k8s-pod-network.79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Workload="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:37.537018 containerd[1469]: 2025-09-05 00:21:37.529 [INFO][5835] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" HandleID="k8s-pod-network.79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Workload="localhost-k8s-calico--kube--controllers--7db846b456--d8kdg-eth0" Sep 5 00:21:37.537018 containerd[1469]: 2025-09-05 00:21:37.531 [INFO][5835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:21:37.537018 containerd[1469]: 2025-09-05 00:21:37.534 [INFO][5823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5" Sep 5 00:21:37.537479 containerd[1469]: time="2025-09-05T00:21:37.537079860Z" level=info msg="TearDown network for sandbox \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\" successfully" Sep 5 00:21:37.541925 containerd[1469]: time="2025-09-05T00:21:37.541881660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:21:37.542084 containerd[1469]: time="2025-09-05T00:21:37.541949006Z" level=info msg="RemovePodSandbox \"79ed332221b765cf537c7d7c7d8b6483d3ba293e05d892db887d51eb010ab7f5\" returns successfully" Sep 5 00:21:37.542495 containerd[1469]: time="2025-09-05T00:21:37.542455965Z" level=info msg="StopPodSandbox for \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\"" Sep 5 00:21:37.618296 containerd[1469]: 2025-09-05 00:21:37.583 [WARNING][5852] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lntfn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4cebde53-a50f-40b3-8246-ec9a25623456", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f", Pod:"coredns-674b8bbfcf-lntfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bfa36f353a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:21:37.618296 containerd[1469]: 2025-09-05 00:21:37.584 [INFO][5852] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Sep 5 00:21:37.618296 containerd[1469]: 2025-09-05 00:21:37.584 [INFO][5852] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" iface="eth0" netns="" Sep 5 00:21:37.618296 containerd[1469]: 2025-09-05 00:21:37.584 [INFO][5852] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Sep 5 00:21:37.618296 containerd[1469]: 2025-09-05 00:21:37.584 [INFO][5852] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Sep 5 00:21:37.618296 containerd[1469]: 2025-09-05 00:21:37.605 [INFO][5861] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" HandleID="k8s-pod-network.6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Workload="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0" Sep 5 00:21:37.618296 containerd[1469]: 2025-09-05 00:21:37.605 [INFO][5861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:21:37.618296 containerd[1469]: 2025-09-05 00:21:37.605 [INFO][5861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:21:37.618296 containerd[1469]: 2025-09-05 00:21:37.611 [WARNING][5861] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" HandleID="k8s-pod-network.6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Workload="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0"
Sep 5 00:21:37.618296 containerd[1469]: 2025-09-05 00:21:37.611 [INFO][5861] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" HandleID="k8s-pod-network.6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Workload="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0"
Sep 5 00:21:37.618296 containerd[1469]: 2025-09-05 00:21:37.612 [INFO][5861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:21:37.618296 containerd[1469]: 2025-09-05 00:21:37.615 [INFO][5852] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811"
Sep 5 00:21:37.619484 containerd[1469]: time="2025-09-05T00:21:37.618321826Z" level=info msg="TearDown network for sandbox \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\" successfully"
Sep 5 00:21:37.619484 containerd[1469]: time="2025-09-05T00:21:37.618347544Z" level=info msg="StopPodSandbox for \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\" returns successfully"
Sep 5 00:21:37.619484 containerd[1469]: time="2025-09-05T00:21:37.618896602Z" level=info msg="RemovePodSandbox for \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\""
Sep 5 00:21:37.619484 containerd[1469]: time="2025-09-05T00:21:37.618927711Z" level=info msg="Forcibly stopping sandbox \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\""
Sep 5 00:21:37.693145 containerd[1469]: 2025-09-05 00:21:37.654 [WARNING][5878] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lntfn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4cebde53-a50f-40b3-8246-ec9a25623456", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8ac8580ff0786ee70aba825438467861738f92164b44c00fe5b8d28b8c0985f", Pod:"coredns-674b8bbfcf-lntfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bfa36f353a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:21:37.693145 containerd[1469]: 2025-09-05 00:21:37.655 [INFO][5878] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811"
Sep 5 00:21:37.693145 containerd[1469]: 2025-09-05 00:21:37.655 [INFO][5878] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" iface="eth0" netns=""
Sep 5 00:21:37.693145 containerd[1469]: 2025-09-05 00:21:37.655 [INFO][5878] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811"
Sep 5 00:21:37.693145 containerd[1469]: 2025-09-05 00:21:37.655 [INFO][5878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811"
Sep 5 00:21:37.693145 containerd[1469]: 2025-09-05 00:21:37.678 [INFO][5887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" HandleID="k8s-pod-network.6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Workload="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0"
Sep 5 00:21:37.693145 containerd[1469]: 2025-09-05 00:21:37.678 [INFO][5887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:21:37.693145 containerd[1469]: 2025-09-05 00:21:37.678 [INFO][5887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:21:37.693145 containerd[1469]: 2025-09-05 00:21:37.685 [WARNING][5887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" HandleID="k8s-pod-network.6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Workload="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0"
Sep 5 00:21:37.693145 containerd[1469]: 2025-09-05 00:21:37.685 [INFO][5887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" HandleID="k8s-pod-network.6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811" Workload="localhost-k8s-coredns--674b8bbfcf--lntfn-eth0"
Sep 5 00:21:37.693145 containerd[1469]: 2025-09-05 00:21:37.687 [INFO][5887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:21:37.693145 containerd[1469]: 2025-09-05 00:21:37.690 [INFO][5878] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811"
Sep 5 00:21:37.693577 containerd[1469]: time="2025-09-05T00:21:37.693190120Z" level=info msg="TearDown network for sandbox \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\" successfully"
Sep 5 00:21:37.697280 containerd[1469]: time="2025-09-05T00:21:37.697225235Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 00:21:37.697280 containerd[1469]: time="2025-09-05T00:21:37.697288754Z" level=info msg="RemovePodSandbox \"6ce1bcd4b1349d3c6c72338e2ca9809ecf22e9a0c178f8ccf43b17fb86a80811\" returns successfully"
Sep 5 00:21:37.697757 containerd[1469]: time="2025-09-05T00:21:37.697732184Z" level=info msg="StopPodSandbox for \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\""
Sep 5 00:21:37.769989 containerd[1469]: 2025-09-05 00:21:37.732 [WARNING][5905] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" WorkloadEndpoint="localhost-k8s-whisker--78bc7854b7--rl9jb-eth0"
Sep 5 00:21:37.769989 containerd[1469]: 2025-09-05 00:21:37.732 [INFO][5905] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024"
Sep 5 00:21:37.769989 containerd[1469]: 2025-09-05 00:21:37.732 [INFO][5905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" iface="eth0" netns=""
Sep 5 00:21:37.769989 containerd[1469]: 2025-09-05 00:21:37.732 [INFO][5905] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024"
Sep 5 00:21:37.769989 containerd[1469]: 2025-09-05 00:21:37.732 [INFO][5905] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024"
Sep 5 00:21:37.769989 containerd[1469]: 2025-09-05 00:21:37.756 [INFO][5914] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" HandleID="k8s-pod-network.3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Workload="localhost-k8s-whisker--78bc7854b7--rl9jb-eth0"
Sep 5 00:21:37.769989 containerd[1469]: 2025-09-05 00:21:37.756 [INFO][5914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:21:37.769989 containerd[1469]: 2025-09-05 00:21:37.756 [INFO][5914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:21:37.769989 containerd[1469]: 2025-09-05 00:21:37.762 [WARNING][5914] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" HandleID="k8s-pod-network.3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Workload="localhost-k8s-whisker--78bc7854b7--rl9jb-eth0"
Sep 5 00:21:37.769989 containerd[1469]: 2025-09-05 00:21:37.763 [INFO][5914] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" HandleID="k8s-pod-network.3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Workload="localhost-k8s-whisker--78bc7854b7--rl9jb-eth0"
Sep 5 00:21:37.769989 containerd[1469]: 2025-09-05 00:21:37.764 [INFO][5914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:21:37.769989 containerd[1469]: 2025-09-05 00:21:37.767 [INFO][5905] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024"
Sep 5 00:21:37.770452 containerd[1469]: time="2025-09-05T00:21:37.770006041Z" level=info msg="TearDown network for sandbox \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\" successfully"
Sep 5 00:21:37.770452 containerd[1469]: time="2025-09-05T00:21:37.770037790Z" level=info msg="StopPodSandbox for \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\" returns successfully"
Sep 5 00:21:37.770524 containerd[1469]: time="2025-09-05T00:21:37.770501098Z" level=info msg="RemovePodSandbox for \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\""
Sep 5 00:21:37.770554 containerd[1469]: time="2025-09-05T00:21:37.770528119Z" level=info msg="Forcibly stopping sandbox \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\""
Sep 5 00:21:37.841125 containerd[1469]: 2025-09-05 00:21:37.803 [WARNING][5931] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" WorkloadEndpoint="localhost-k8s-whisker--78bc7854b7--rl9jb-eth0"
Sep 5 00:21:37.841125 containerd[1469]: 2025-09-05 00:21:37.803 [INFO][5931] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024"
Sep 5 00:21:37.841125 containerd[1469]: 2025-09-05 00:21:37.803 [INFO][5931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" iface="eth0" netns=""
Sep 5 00:21:37.841125 containerd[1469]: 2025-09-05 00:21:37.803 [INFO][5931] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024"
Sep 5 00:21:37.841125 containerd[1469]: 2025-09-05 00:21:37.803 [INFO][5931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024"
Sep 5 00:21:37.841125 containerd[1469]: 2025-09-05 00:21:37.826 [INFO][5939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" HandleID="k8s-pod-network.3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Workload="localhost-k8s-whisker--78bc7854b7--rl9jb-eth0"
Sep 5 00:21:37.841125 containerd[1469]: 2025-09-05 00:21:37.826 [INFO][5939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:21:37.841125 containerd[1469]: 2025-09-05 00:21:37.826 [INFO][5939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:21:37.841125 containerd[1469]: 2025-09-05 00:21:37.832 [WARNING][5939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" HandleID="k8s-pod-network.3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Workload="localhost-k8s-whisker--78bc7854b7--rl9jb-eth0"
Sep 5 00:21:37.841125 containerd[1469]: 2025-09-05 00:21:37.832 [INFO][5939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" HandleID="k8s-pod-network.3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024" Workload="localhost-k8s-whisker--78bc7854b7--rl9jb-eth0"
Sep 5 00:21:37.841125 containerd[1469]: 2025-09-05 00:21:37.834 [INFO][5939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:21:37.841125 containerd[1469]: 2025-09-05 00:21:37.837 [INFO][5931] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024"
Sep 5 00:21:37.841125 containerd[1469]: time="2025-09-05T00:21:37.841079380Z" level=info msg="TearDown network for sandbox \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\" successfully"
Sep 5 00:21:37.851919 containerd[1469]: time="2025-09-05T00:21:37.851867598Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 00:21:37.851984 containerd[1469]: time="2025-09-05T00:21:37.851947056Z" level=info msg="RemovePodSandbox \"3faadcae269bcd5e0f6b664bed0105ba1ee57819d6f8fc9e2cf66898bd2fd024\" returns successfully"
Sep 5 00:21:37.852524 containerd[1469]: time="2025-09-05T00:21:37.852473772Z" level=info msg="StopPodSandbox for \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\""
Sep 5 00:21:37.930762 containerd[1469]: 2025-09-05 00:21:37.889 [WARNING][5957] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0", GenerateName:"calico-apiserver-575c74c6d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6875358a-dc29-4093-b0ab-9707efb59b86", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575c74c6d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a", Pod:"calico-apiserver-575c74c6d4-v8wf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2386624047", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:21:37.930762 containerd[1469]: 2025-09-05 00:21:37.889 [INFO][5957] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034"
Sep 5 00:21:37.930762 containerd[1469]: 2025-09-05 00:21:37.890 [INFO][5957] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" iface="eth0" netns=""
Sep 5 00:21:37.930762 containerd[1469]: 2025-09-05 00:21:37.890 [INFO][5957] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034"
Sep 5 00:21:37.930762 containerd[1469]: 2025-09-05 00:21:37.890 [INFO][5957] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034"
Sep 5 00:21:37.930762 containerd[1469]: 2025-09-05 00:21:37.917 [INFO][5965] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" HandleID="k8s-pod-network.3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Workload="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0"
Sep 5 00:21:37.930762 containerd[1469]: 2025-09-05 00:21:37.917 [INFO][5965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:21:37.930762 containerd[1469]: 2025-09-05 00:21:37.917 [INFO][5965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:21:37.930762 containerd[1469]: 2025-09-05 00:21:37.922 [WARNING][5965] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" HandleID="k8s-pod-network.3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Workload="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0"
Sep 5 00:21:37.930762 containerd[1469]: 2025-09-05 00:21:37.923 [INFO][5965] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" HandleID="k8s-pod-network.3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Workload="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0"
Sep 5 00:21:37.930762 containerd[1469]: 2025-09-05 00:21:37.924 [INFO][5965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:21:37.930762 containerd[1469]: 2025-09-05 00:21:37.927 [INFO][5957] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034"
Sep 5 00:21:37.931319 containerd[1469]: time="2025-09-05T00:21:37.930811061Z" level=info msg="TearDown network for sandbox \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\" successfully"
Sep 5 00:21:37.931319 containerd[1469]: time="2025-09-05T00:21:37.930837631Z" level=info msg="StopPodSandbox for \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\" returns successfully"
Sep 5 00:21:37.931486 containerd[1469]: time="2025-09-05T00:21:37.931428837Z" level=info msg="RemovePodSandbox for \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\""
Sep 5 00:21:37.931486 containerd[1469]: time="2025-09-05T00:21:37.931485554Z" level=info msg="Forcibly stopping sandbox \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\""
Sep 5 00:21:38.022252 containerd[1469]: 2025-09-05 00:21:37.980 [WARNING][5983] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0", GenerateName:"calico-apiserver-575c74c6d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6875358a-dc29-4093-b0ab-9707efb59b86", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 20, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575c74c6d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84d0e8878926d09890df3c21a33dade5a080f86f97855346c574dfb7ff2e539a", Pod:"calico-apiserver-575c74c6d4-v8wf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2386624047", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:21:38.022252 containerd[1469]: 2025-09-05 00:21:37.980 [INFO][5983] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034"
Sep 5 00:21:38.022252 containerd[1469]: 2025-09-05 00:21:37.980 [INFO][5983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" iface="eth0" netns=""
Sep 5 00:21:38.022252 containerd[1469]: 2025-09-05 00:21:37.980 [INFO][5983] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034"
Sep 5 00:21:38.022252 containerd[1469]: 2025-09-05 00:21:37.980 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034"
Sep 5 00:21:38.022252 containerd[1469]: 2025-09-05 00:21:38.007 [INFO][5992] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" HandleID="k8s-pod-network.3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Workload="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0"
Sep 5 00:21:38.022252 containerd[1469]: 2025-09-05 00:21:38.007 [INFO][5992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:21:38.022252 containerd[1469]: 2025-09-05 00:21:38.007 [INFO][5992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:21:38.022252 containerd[1469]: 2025-09-05 00:21:38.013 [WARNING][5992] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" HandleID="k8s-pod-network.3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Workload="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0"
Sep 5 00:21:38.022252 containerd[1469]: 2025-09-05 00:21:38.013 [INFO][5992] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" HandleID="k8s-pod-network.3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034" Workload="localhost-k8s-calico--apiserver--575c74c6d4--v8wf5-eth0"
Sep 5 00:21:38.022252 containerd[1469]: 2025-09-05 00:21:38.015 [INFO][5992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:21:38.022252 containerd[1469]: 2025-09-05 00:21:38.018 [INFO][5983] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034"
Sep 5 00:21:38.022252 containerd[1469]: time="2025-09-05T00:21:38.022084375Z" level=info msg="TearDown network for sandbox \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\" successfully"
Sep 5 00:21:38.054456 containerd[1469]: time="2025-09-05T00:21:38.054373792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 00:21:38.054661 containerd[1469]: time="2025-09-05T00:21:38.054522811Z" level=info msg="RemovePodSandbox \"3ddfc90dd0a4ac5b198f3976cfc60e9514d518c7c5ff49396fc3c838d1d8e034\" returns successfully"
Sep 5 00:21:38.059667 kubelet[2562]: I0905 00:21:38.059115 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-k7qc4" podStartSLOduration=31.918874604 podStartE2EDuration="45.059097406s" podCreationTimestamp="2025-09-05 00:20:53 +0000 UTC" firstStartedPulling="2025-09-05 00:21:24.149954011 +0000 UTC m=+47.931646742" lastFinishedPulling="2025-09-05 00:21:37.290176813 +0000 UTC m=+61.071869544" observedRunningTime="2025-09-05 00:21:38.058522258 +0000 UTC m=+61.840214989" watchObservedRunningTime="2025-09-05 00:21:38.059097406 +0000 UTC m=+61.840790127"
Sep 5 00:21:38.871654 systemd[1]: Started sshd@15-10.0.0.128:22-10.0.0.1:43952.service - OpenSSH per-connection server daemon (10.0.0.1:43952).
Sep 5 00:21:39.015712 sshd[6024]: Accepted publickey for core from 10.0.0.1 port 43952 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:21:39.018500 sshd[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:21:39.030751 systemd-logind[1451]: New session 16 of user core.
Sep 5 00:21:39.041551 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 00:21:39.096455 systemd[1]: run-containerd-runc-k8s.io-2eb4eb02832898ec8f6786b986f0ead400402a0ce060c9961da76b3f7b09cd97-runc.a2gp72.mount: Deactivated successfully.
Sep 5 00:21:39.375559 sshd[6024]: pam_unix(sshd:session): session closed for user core
Sep 5 00:21:39.380068 systemd[1]: sshd@15-10.0.0.128:22-10.0.0.1:43952.service: Deactivated successfully.
Sep 5 00:21:39.382507 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 00:21:39.383219 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit.
Sep 5 00:21:39.384546 systemd-logind[1451]: Removed session 16.
Sep 5 00:21:40.711720 containerd[1469]: time="2025-09-05T00:21:40.711665075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:21:40.714009 containerd[1469]: time="2025-09-05T00:21:40.713953845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 5 00:21:40.715727 containerd[1469]: time="2025-09-05T00:21:40.715689547Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:21:40.721162 containerd[1469]: time="2025-09-05T00:21:40.720514123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:21:40.721162 containerd[1469]: time="2025-09-05T00:21:40.720966323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.429849541s"
Sep 5 00:21:40.721162 containerd[1469]: time="2025-09-05T00:21:40.721001679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 5 00:21:40.732454 containerd[1469]: time="2025-09-05T00:21:40.732395167Z" level=info msg="CreateContainer within sandbox \"93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 5 00:21:41.356909 containerd[1469]: time="2025-09-05T00:21:41.356841793Z" level=info msg="CreateContainer within sandbox \"93554471090c54faa4dbcb02ee0643b23e0aa36832ed32efa78f8a36879ed83d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2cb91a6f745c633619a775995c7676e66bc9b9745955450543a93bca734ae36a\""
Sep 5 00:21:41.357452 containerd[1469]: time="2025-09-05T00:21:41.357390786Z" level=info msg="StartContainer for \"2cb91a6f745c633619a775995c7676e66bc9b9745955450543a93bca734ae36a\""
Sep 5 00:21:41.390966 systemd[1]: Started cri-containerd-2cb91a6f745c633619a775995c7676e66bc9b9745955450543a93bca734ae36a.scope - libcontainer container 2cb91a6f745c633619a775995c7676e66bc9b9745955450543a93bca734ae36a.
Sep 5 00:21:41.573797 containerd[1469]: time="2025-09-05T00:21:41.573723550Z" level=info msg="StartContainer for \"2cb91a6f745c633619a775995c7676e66bc9b9745955450543a93bca734ae36a\" returns successfully"
Sep 5 00:21:42.481614 kubelet[2562]: I0905 00:21:42.481560 2562 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 5 00:21:42.482809 kubelet[2562]: I0905 00:21:42.482793 2562 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 5 00:21:43.657990 kubelet[2562]: I0905 00:21:43.657936 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:21:43.673396 kubelet[2562]: I0905 00:21:43.673314 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ps2kf" podStartSLOduration=31.070363292 podStartE2EDuration="49.673292999s" podCreationTimestamp="2025-09-05 00:20:54 +0000 UTC" firstStartedPulling="2025-09-05 00:21:22.120235167 +0000 UTC m=+45.901927898" lastFinishedPulling="2025-09-05 00:21:40.723164874 +0000 UTC m=+64.504857605" observedRunningTime="2025-09-05 00:21:42.105555819 +0000 UTC m=+65.887248570" watchObservedRunningTime="2025-09-05 00:21:43.673292999 +0000 UTC m=+67.454985730"
Sep 5 00:21:44.411651 systemd[1]: Started sshd@16-10.0.0.128:22-10.0.0.1:33140.service - OpenSSH per-connection server daemon (10.0.0.1:33140).
Sep 5 00:21:44.602923 sshd[6109]: Accepted publickey for core from 10.0.0.1 port 33140 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:21:44.616259 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:21:44.632512 systemd-logind[1451]: New session 17 of user core.
Sep 5 00:21:44.639140 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 00:21:45.315616 sshd[6109]: pam_unix(sshd:session): session closed for user core
Sep 5 00:21:45.329661 systemd[1]: sshd@16-10.0.0.128:22-10.0.0.1:33140.service: Deactivated successfully.
Sep 5 00:21:45.334424 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 00:21:45.341127 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit.
Sep 5 00:21:45.344798 systemd-logind[1451]: Removed session 17.
Sep 5 00:21:50.320024 systemd[1]: Started sshd@17-10.0.0.128:22-10.0.0.1:36108.service - OpenSSH per-connection server daemon (10.0.0.1:36108).
Sep 5 00:21:50.354873 sshd[6175]: Accepted publickey for core from 10.0.0.1 port 36108 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:21:50.356580 sshd[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:21:50.360538 systemd-logind[1451]: New session 18 of user core.
Sep 5 00:21:50.369917 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 00:21:50.536688 sshd[6175]: pam_unix(sshd:session): session closed for user core
Sep 5 00:21:50.541112 systemd[1]: sshd@17-10.0.0.128:22-10.0.0.1:36108.service: Deactivated successfully.
Sep 5 00:21:50.543571 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 00:21:50.544318 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit.
Sep 5 00:21:50.545441 systemd-logind[1451]: Removed session 18.
Sep 5 00:21:55.552995 systemd[1]: Started sshd@18-10.0.0.128:22-10.0.0.1:36118.service - OpenSSH per-connection server daemon (10.0.0.1:36118).
Sep 5 00:21:55.626719 sshd[6190]: Accepted publickey for core from 10.0.0.1 port 36118 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:21:55.629221 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:21:55.633968 systemd-logind[1451]: New session 19 of user core.
Sep 5 00:21:55.643962 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 00:21:55.973938 sshd[6190]: pam_unix(sshd:session): session closed for user core
Sep 5 00:21:55.979252 systemd[1]: sshd@18-10.0.0.128:22-10.0.0.1:36118.service: Deactivated successfully.
Sep 5 00:21:55.981461 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 00:21:55.982133 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit.
Sep 5 00:21:55.983117 systemd-logind[1451]: Removed session 19.
Sep 5 00:21:58.767573 kernel: hrtimer: interrupt took 4008151 ns
Sep 5 00:22:01.021114 systemd[1]: Started sshd@19-10.0.0.128:22-10.0.0.1:43496.service - OpenSSH per-connection server daemon (10.0.0.1:43496).
Sep 5 00:22:01.132972 sshd[6205]: Accepted publickey for core from 10.0.0.1 port 43496 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:22:01.136179 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:01.154515 systemd-logind[1451]: New session 20 of user core.
Sep 5 00:22:01.182000 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 00:22:01.561144 sshd[6205]: pam_unix(sshd:session): session closed for user core
Sep 5 00:22:01.579651 systemd[1]: sshd@19-10.0.0.128:22-10.0.0.1:43496.service: Deactivated successfully.
Sep 5 00:22:01.582622 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 00:22:01.584980 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit.
Sep 5 00:22:01.599435 systemd[1]: Started sshd@20-10.0.0.128:22-10.0.0.1:43498.service - OpenSSH per-connection server daemon (10.0.0.1:43498).
Sep 5 00:22:01.600884 systemd-logind[1451]: Removed session 20.
Sep 5 00:22:01.633437 sshd[6223]: Accepted publickey for core from 10.0.0.1 port 43498 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:22:01.635486 sshd[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:01.641472 systemd-logind[1451]: New session 21 of user core.
Sep 5 00:22:01.652198 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 5 00:22:02.129206 sshd[6223]: pam_unix(sshd:session): session closed for user core
Sep 5 00:22:02.137964 systemd[1]: sshd@20-10.0.0.128:22-10.0.0.1:43498.service: Deactivated successfully.
Sep 5 00:22:02.139996 systemd[1]: session-21.scope: Deactivated successfully.
Sep 5 00:22:02.140828 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit.
Sep 5 00:22:02.152184 systemd[1]: Started sshd@21-10.0.0.128:22-10.0.0.1:43514.service - OpenSSH per-connection server daemon (10.0.0.1:43514).
Sep 5 00:22:02.153285 systemd-logind[1451]: Removed session 21.
Sep 5 00:22:02.188705 sshd[6235]: Accepted publickey for core from 10.0.0.1 port 43514 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:22:02.191234 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:02.196391 systemd-logind[1451]: New session 22 of user core.
Sep 5 00:22:02.209089 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 5 00:22:02.773582 sshd[6235]: pam_unix(sshd:session): session closed for user core
Sep 5 00:22:02.785617 systemd[1]: sshd@21-10.0.0.128:22-10.0.0.1:43514.service: Deactivated successfully.
Sep 5 00:22:02.791901 systemd[1]: session-22.scope: Deactivated successfully.
Sep 5 00:22:02.796585 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit.
Sep 5 00:22:02.805273 systemd[1]: Started sshd@22-10.0.0.128:22-10.0.0.1:43526.service - OpenSSH per-connection server daemon (10.0.0.1:43526).
Sep 5 00:22:02.806761 systemd-logind[1451]: Removed session 22.
Sep 5 00:22:02.838716 sshd[6255]: Accepted publickey for core from 10.0.0.1 port 43526 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:22:02.841010 sshd[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:02.845575 systemd-logind[1451]: New session 23 of user core.
Sep 5 00:22:02.854069 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 5 00:22:03.377554 sshd[6255]: pam_unix(sshd:session): session closed for user core
Sep 5 00:22:03.389946 systemd[1]: sshd@22-10.0.0.128:22-10.0.0.1:43526.service: Deactivated successfully.
Sep 5 00:22:03.394653 systemd[1]: session-23.scope: Deactivated successfully.
Sep 5 00:22:03.396590 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit.
Sep 5 00:22:03.403088 systemd[1]: Started sshd@23-10.0.0.128:22-10.0.0.1:43530.service - OpenSSH per-connection server daemon (10.0.0.1:43530).
Sep 5 00:22:03.404317 systemd-logind[1451]: Removed session 23.
Sep 5 00:22:03.439933 sshd[6267]: Accepted publickey for core from 10.0.0.1 port 43530 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:22:03.441616 sshd[6267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:03.445957 systemd-logind[1451]: New session 24 of user core.
Sep 5 00:22:03.456063 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 5 00:22:03.591841 sshd[6267]: pam_unix(sshd:session): session closed for user core
Sep 5 00:22:03.596538 systemd[1]: sshd@23-10.0.0.128:22-10.0.0.1:43530.service: Deactivated successfully.
Sep 5 00:22:03.599327 systemd[1]: session-24.scope: Deactivated successfully.
Sep 5 00:22:03.600355 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit.
Sep 5 00:22:03.601418 systemd-logind[1451]: Removed session 24.
Sep 5 00:22:05.387953 kubelet[2562]: E0905 00:22:05.387756 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:08.606427 systemd[1]: Started sshd@24-10.0.0.128:22-10.0.0.1:43540.service - OpenSSH per-connection server daemon (10.0.0.1:43540).
Sep 5 00:22:08.644034 sshd[6309]: Accepted publickey for core from 10.0.0.1 port 43540 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:22:08.646891 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:08.652221 systemd-logind[1451]: New session 25 of user core.
Sep 5 00:22:08.666041 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 5 00:22:08.785505 sshd[6309]: pam_unix(sshd:session): session closed for user core
Sep 5 00:22:08.789703 systemd[1]: sshd@24-10.0.0.128:22-10.0.0.1:43540.service: Deactivated successfully.
Sep 5 00:22:08.791758 systemd[1]: session-25.scope: Deactivated successfully.
Sep 5 00:22:08.792473 systemd-logind[1451]: Session 25 logged out. Waiting for processes to exit.
Sep 5 00:22:08.793832 systemd-logind[1451]: Removed session 25.
Sep 5 00:22:09.324050 kubelet[2562]: E0905 00:22:09.323334 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:10.320862 kubelet[2562]: E0905 00:22:10.320756 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:13.806223 systemd[1]: Started sshd@25-10.0.0.128:22-10.0.0.1:50482.service - OpenSSH per-connection server daemon (10.0.0.1:50482).
Sep 5 00:22:13.840471 sshd[6351]: Accepted publickey for core from 10.0.0.1 port 50482 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:22:13.842407 sshd[6351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:13.846598 systemd-logind[1451]: New session 26 of user core.
Sep 5 00:22:13.853957 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 5 00:22:13.963202 sshd[6351]: pam_unix(sshd:session): session closed for user core
Sep 5 00:22:13.967201 systemd[1]: sshd@25-10.0.0.128:22-10.0.0.1:50482.service: Deactivated successfully.
Sep 5 00:22:13.969528 systemd[1]: session-26.scope: Deactivated successfully.
Sep 5 00:22:13.970255 systemd-logind[1451]: Session 26 logged out. Waiting for processes to exit.
Sep 5 00:22:13.971202 systemd-logind[1451]: Removed session 26.
Sep 5 00:22:17.322626 kubelet[2562]: E0905 00:22:17.322585 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:18.975814 systemd[1]: Started sshd@26-10.0.0.128:22-10.0.0.1:50484.service - OpenSSH per-connection server daemon (10.0.0.1:50484).
Sep 5 00:22:19.033801 sshd[6366]: Accepted publickey for core from 10.0.0.1 port 50484 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:22:19.036161 sshd[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:19.042181 systemd-logind[1451]: New session 27 of user core.
Sep 5 00:22:19.051182 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 5 00:22:19.356892 sshd[6366]: pam_unix(sshd:session): session closed for user core
Sep 5 00:22:19.369167 systemd[1]: sshd@26-10.0.0.128:22-10.0.0.1:50484.service: Deactivated successfully.
Sep 5 00:22:19.374242 systemd[1]: session-27.scope: Deactivated successfully.
Sep 5 00:22:19.379977 systemd-logind[1451]: Session 27 logged out. Waiting for processes to exit.
Sep 5 00:22:19.381500 systemd-logind[1451]: Removed session 27.